00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2408
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3673
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.180 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.181 The recommended git tool is: git
00:00:00.181 using credential 00000000-0000-0000-0000-000000000002
00:00:00.187 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.235 Fetching changes from the remote Git repository
00:00:00.237 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.293 Using shallow fetch with depth 1
00:00:00.293 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.293 > git --version # timeout=10
00:00:00.312 > git --version # 'git version 2.39.2'
00:00:00.312 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.326 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.326 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.701 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.714 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.729 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.729 > git config core.sparsecheckout # timeout=10
00:00:07.743 > git read-tree -mu HEAD # timeout=10
00:00:07.762 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.787 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.787 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.891 [Pipeline] Start of Pipeline
00:00:07.908 [Pipeline] library
00:00:07.909 Loading library shm_lib@master
00:00:07.910 Library shm_lib@master is cached. Copying from home.
00:00:07.927 [Pipeline] node
00:00:07.950 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:07.951 [Pipeline] {
00:00:07.962 [Pipeline] catchError
00:00:07.963 [Pipeline] {
00:00:07.974 [Pipeline] wrap
00:00:07.981 [Pipeline] {
00:00:07.987 [Pipeline] stage
00:00:07.988 [Pipeline] { (Prologue)
00:00:08.002 [Pipeline] echo
00:00:08.003 Node: VM-host-SM38
00:00:08.009 [Pipeline] cleanWs
00:00:08.018 [WS-CLEANUP] Deleting project workspace...
00:00:08.018 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.025 [WS-CLEANUP] done
00:00:08.216 [Pipeline] setCustomBuildProperty
00:00:08.284 [Pipeline] httpRequest
00:00:11.314 [Pipeline] echo
00:00:11.316 Sorcerer 10.211.164.101 is dead
00:00:11.326 [Pipeline] httpRequest
00:00:12.255 [Pipeline] echo
00:00:12.257 Sorcerer 10.211.164.101 is alive
00:00:12.267 [Pipeline] retry
00:00:12.269 [Pipeline] {
00:00:12.284 [Pipeline] httpRequest
00:00:12.289 HttpMethod: GET
00:00:12.290 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.290 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.316 Response Code: HTTP/1.1 200 OK
00:00:12.317 Success: Status code 200 is in the accepted range: 200,404
00:00:12.317 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:29.928 [Pipeline] }
00:00:29.951 [Pipeline] // retry
00:00:29.960 [Pipeline] sh
00:00:30.253 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:30.274 [Pipeline] httpRequest
00:00:30.685 [Pipeline] echo
00:00:30.687 Sorcerer 10.211.164.101 is alive
00:00:30.697 [Pipeline] retry
00:00:30.699 [Pipeline] {
00:00:30.713 [Pipeline] httpRequest
00:00:30.719 HttpMethod: GET
00:00:30.719 URL: http://10.211.164.101/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:00:30.720 Sending request to url: http://10.211.164.101/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:00:30.725 Response Code: HTTP/1.1 200 OK
00:00:30.726 Success: Status code 200 is in the accepted range: 200,404
00:00:30.727 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:01:39.595 [Pipeline] }
00:01:39.613 [Pipeline] // retry
00:01:39.620 [Pipeline] sh
00:01:39.905 + tar --no-same-owner -xf spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:01:42.467 [Pipeline] sh
00:01:42.753 + git -C spdk log --oneline -n5
00:01:42.753 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:01:42.753 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:01:42.753 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev
00:01:42.753 2e10c84c8 nvmf: Expose DIF type of namespace to host again
00:01:42.753 38b931b23 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write
00:01:42.773 [Pipeline] withCredentials
00:01:42.784 > git --version # timeout=10
00:01:42.800 > git --version # 'git version 2.39.2'
00:01:42.813 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:42.815 [Pipeline] {
00:01:42.825 [Pipeline] retry
00:01:42.827 [Pipeline] {
00:01:42.846 [Pipeline] sh
00:01:43.133 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:01:43.147 [Pipeline] }
00:01:43.167 [Pipeline] // retry
00:01:43.174 [Pipeline] }
00:01:43.190 [Pipeline] // withCredentials
00:01:43.203 [Pipeline] httpRequest
00:01:43.657 [Pipeline] echo
00:01:43.660 Sorcerer 10.211.164.101 is alive
00:01:43.672 [Pipeline] retry
00:01:43.674 [Pipeline] {
00:01:43.689 [Pipeline] httpRequest
00:01:43.694 HttpMethod: GET
00:01:43.695 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:43.696 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:43.714 Response Code: HTTP/1.1 200 OK
00:01:43.714 Success: Status code 200 is in the accepted range: 200,404
00:01:43.715 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:56.599 [Pipeline] }
00:01:56.617 [Pipeline] // retry
00:01:56.626 [Pipeline] sh
00:01:56.933 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:58.325 [Pipeline] sh
00:01:58.609 + git -C dpdk log --oneline -n5
00:01:58.609 caf0f5d395 version: 22.11.4
00:01:58.609 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:01:58.609 dc9c799c7d vhost: fix missing spinlock unlock
00:01:58.609 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:01:58.609 6ef77f2a5e net/gve: fix RX buffer size alignment
00:01:58.628 [Pipeline] writeFile
00:01:58.644 [Pipeline] sh
00:01:58.929 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:58.943 [Pipeline] sh
00:01:59.225 + cat autorun-spdk.conf
00:01:59.225 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:59.225 SPDK_TEST_NVME=1
00:01:59.225 SPDK_TEST_FTL=1
00:01:59.225 SPDK_TEST_ISAL=1
00:01:59.225 SPDK_RUN_ASAN=1
00:01:59.225 SPDK_RUN_UBSAN=1
00:01:59.225 SPDK_TEST_XNVME=1
00:01:59.225 SPDK_TEST_NVME_FDP=1
00:01:59.225 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:59.225 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:59.225 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:59.232 RUN_NIGHTLY=1
00:01:59.234 [Pipeline] }
00:01:59.247 [Pipeline] // stage
00:01:59.262 [Pipeline] stage
00:01:59.264 [Pipeline] { (Run VM)
00:01:59.277 [Pipeline] sh
00:01:59.582 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:59.582 + echo 'Start stage prepare_nvme.sh'
00:01:59.582 Start stage prepare_nvme.sh
00:01:59.582 + [[ -n 3 ]]
00:01:59.582 + disk_prefix=ex3
00:01:59.582 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:59.582 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:59.582 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:59.582 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:59.582 ++ SPDK_TEST_NVME=1
00:01:59.582 ++ SPDK_TEST_FTL=1
00:01:59.582 ++ SPDK_TEST_ISAL=1
00:01:59.582 ++ SPDK_RUN_ASAN=1
00:01:59.582 ++ SPDK_RUN_UBSAN=1
00:01:59.582 ++ SPDK_TEST_XNVME=1
00:01:59.582 ++ SPDK_TEST_NVME_FDP=1
00:01:59.582 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:59.582 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:59.582 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:59.582 ++ RUN_NIGHTLY=1
00:01:59.582 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:59.582 + nvme_files=()
00:01:59.582 + declare -A nvme_files
00:01:59.582 + backend_dir=/var/lib/libvirt/images/backends
00:01:59.582 + nvme_files['nvme.img']=5G
00:01:59.582 + nvme_files['nvme-cmb.img']=5G
00:01:59.582 + nvme_files['nvme-multi0.img']=4G
00:01:59.582 + nvme_files['nvme-multi1.img']=4G
00:01:59.582 + nvme_files['nvme-multi2.img']=4G
00:01:59.582 + nvme_files['nvme-openstack.img']=8G
00:01:59.582 + nvme_files['nvme-zns.img']=5G
00:01:59.582 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:59.582 + (( SPDK_TEST_FTL == 1 ))
00:01:59.582 + nvme_files["nvme-ftl.img"]=6G
00:01:59.582 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:59.582 + nvme_files["nvme-fdp.img"]=1G
00:01:59.582 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:59.582 + for nvme in "${!nvme_files[@]}"
00:01:59.582 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G
00:01:59.582 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:59.582 + for nvme in "${!nvme_files[@]}"
00:01:59.582 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-ftl.img -s 6G
00:01:59.582 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:59.582 + for nvme in "${!nvme_files[@]}"
00:01:59.582 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G
00:01:59.582 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:59.582 + for nvme in "${!nvme_files[@]}"
00:01:59.582 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G
00:01:59.582 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:59.582 + for nvme in "${!nvme_files[@]}"
00:01:59.582 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G
00:02:00.154 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:02:00.154 + for nvme in "${!nvme_files[@]}"
00:02:00.154 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G
00:02:00.154 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:02:00.415 + for nvme in "${!nvme_files[@]}"
00:02:00.415 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G
00:02:00.415 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:02:00.415 + for nvme in "${!nvme_files[@]}"
00:02:00.415 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-fdp.img -s 1G
00:02:00.415 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:02:00.415 + for nvme in "${!nvme_files[@]}"
00:02:00.415 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G
00:02:00.984 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:02:00.984 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu
00:02:00.984 + echo 'End stage prepare_nvme.sh'
00:02:00.984 End stage prepare_nvme.sh
00:02:00.996 [Pipeline] sh
00:02:01.279 + DISTRO=fedora39
00:02:01.279 + CPUS=10
00:02:01.279 + RAM=12288
00:02:01.279 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:02:01.279 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex3-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:02:01.279
00:02:01.279 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:02:01.279 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:02:01.279 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:02:01.279 HELP=0
00:02:01.279 DRY_RUN=0
00:02:01.279 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,
00:02:01.279 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:02:01.279 NVME_AUTO_CREATE=0
00:02:01.279 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,,
00:02:01.279 NVME_CMB=,,,,
00:02:01.279 NVME_PMR=,,,,
00:02:01.279 NVME_ZNS=,,,,
00:02:01.279 NVME_MS=true,,,,
00:02:01.279 NVME_FDP=,,,on,
00:02:01.279 SPDK_VAGRANT_DISTRO=fedora39
00:02:01.279 SPDK_VAGRANT_VMCPU=10
00:02:01.279 SPDK_VAGRANT_VMRAM=12288
00:02:01.279 SPDK_VAGRANT_PROVIDER=libvirt
00:02:01.279 SPDK_VAGRANT_HTTP_PROXY=
00:02:01.279 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:02:01.279 SPDK_OPENSTACK_NETWORK=0
00:02:01.279 VAGRANT_PACKAGE_BOX=0
00:02:01.279 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:02:01.279 FORCE_DISTRO=true
00:02:01.279 VAGRANT_BOX_VERSION=
00:02:01.279 EXTRA_VAGRANTFILES=
00:02:01.279 NIC_MODEL=e1000
00:02:01.279
00:02:01.279 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:02:01.279 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:02:03.824 Bringing machine 'default' up with 'libvirt' provider...
00:02:04.086 ==> default: Creating image (snapshot of base box volume).
00:02:04.347 ==> default: Creating domain with the following settings...
00:02:04.347 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732743267_c6c9821d2c1313ef359c
00:02:04.347 ==> default: -- Domain type: kvm
00:02:04.347 ==> default: -- Cpus: 10
00:02:04.347 ==> default: -- Feature: acpi
00:02:04.347 ==> default: -- Feature: apic
00:02:04.347 ==> default: -- Feature: pae
00:02:04.347 ==> default: -- Memory: 12288M
00:02:04.347 ==> default: -- Memory Backing: hugepages:
00:02:04.347 ==> default: -- Management MAC:
00:02:04.347 ==> default: -- Loader:
00:02:04.347 ==> default: -- Nvram:
00:02:04.347 ==> default: -- Base box: spdk/fedora39
00:02:04.347 ==> default: -- Storage pool: default
00:02:04.347 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732743267_c6c9821d2c1313ef359c.img (20G)
00:02:04.347 ==> default: -- Volume Cache: default
00:02:04.347 ==> default: -- Kernel:
00:02:04.347 ==> default: -- Initrd:
00:02:04.347 ==> default: -- Graphics Type: vnc
00:02:04.347 ==> default: -- Graphics Port: -1
00:02:04.347 ==> default: -- Graphics IP: 127.0.0.1
00:02:04.347 ==> default: -- Graphics Password: Not defined
00:02:04.347 ==> default: -- Video Type: cirrus
00:02:04.347 ==> default: -- Video VRAM: 9216
00:02:04.347 ==> default: -- Sound Type:
00:02:04.347 ==> default: -- Keymap: en-us
00:02:04.347 ==> default: -- TPM Path:
00:02:04.347 ==> default: -- INPUT: type=mouse, bus=ps2
00:02:04.347 ==> default: -- Command line args:
00:02:04.347 ==> default: -> value=-device,
00:02:04.347 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:02:04.348 ==> default: -> value=-drive,
00:02:04.348 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:02:04.348 ==> default: -> value=-device,
00:02:04.348 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:02:04.348 ==> default: -> value=-device,
00:02:04.348 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:02:04.348 ==> default: -> value=-drive,
00:02:04.348 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-1-drive0,
00:02:04.348 ==> default: -> value=-device,
00:02:04.348 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:04.348 ==> default: -> value=-device,
00:02:04.348 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:02:04.348 ==> default: -> value=-drive,
00:02:04.348 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:02:04.348 ==> default: -> value=-device,
00:02:04.348 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:04.348 ==> default: -> value=-drive,
00:02:04.348 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:02:04.348 ==> default: -> value=-device,
00:02:04.348 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:04.348 ==> default: -> value=-drive,
00:02:04.348 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:02:04.348 ==> default: -> value=-device,
00:02:04.348 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:04.348 ==> default: -> value=-device,
00:02:04.348 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:02:04.348 ==> default: -> value=-device,
00:02:04.348 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:02:04.348 ==> default: -> value=-drive,
00:02:04.348 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:02:04.348 ==> default: -> value=-device,
00:02:04.348 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:04.609 ==> default: Creating shared folders metadata...
00:02:04.609 ==> default: Starting domain.
00:02:05.994 ==> default: Waiting for domain to get an IP address...
00:02:27.963 ==> default: Waiting for SSH to become available...
00:02:27.963 ==> default: Configuring and enabling network interfaces...
00:02:29.881 default: SSH address: 192.168.121.25:22
00:02:29.881 default: SSH username: vagrant
00:02:29.881 default: SSH auth method: private key
00:02:31.795 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:39.941 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:02:45.237 ==> default: Mounting SSHFS shared folder...
00:02:47.784 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:47.784 ==> default: Checking Mount..
00:02:48.728 ==> default: Folder Successfully Mounted!
00:02:48.728
00:02:48.728 SUCCESS!
00:02:48.728
00:02:48.728 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:48.728 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:48.728 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:48.728
00:02:48.739 [Pipeline] }
00:02:48.754 [Pipeline] // stage
00:02:48.765 [Pipeline] dir
00:02:48.765 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:48.767 [Pipeline] {
00:02:48.780 [Pipeline] catchError
00:02:48.782 [Pipeline] {
00:02:48.795 [Pipeline] sh
00:02:49.079 + vagrant ssh-config --host vagrant
00:02:49.079 + sed -ne '/^Host/,$p'
00:02:49.079 + tee ssh_conf
00:02:51.627 Host vagrant
00:02:51.627 HostName 192.168.121.25
00:02:51.627 User vagrant
00:02:51.627 Port 22
00:02:51.627 UserKnownHostsFile /dev/null
00:02:51.627 StrictHostKeyChecking no
00:02:51.627 PasswordAuthentication no
00:02:51.627 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:51.627 IdentitiesOnly yes
00:02:51.627 LogLevel FATAL
00:02:51.627 ForwardAgent yes
00:02:51.627 ForwardX11 yes
00:02:51.627
00:02:51.642 [Pipeline] withEnv
00:02:51.645 [Pipeline] {
00:02:51.659 [Pipeline] sh
00:02:51.941 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:51.941 source /etc/os-release
00:02:51.941 [[ -e /image.version ]] && img=$(< /image.version)
00:02:51.941 # Minimal, systemd-like check.
00:02:51.941 if [[ -e /.dockerenv ]]; then
00:02:51.941 # Clear garbage from the node'\''s name:
00:02:51.941 # agt-er_autotest_547-896 -> autotest_547-896
00:02:51.941 # $HOSTNAME is the actual container id
00:02:51.941 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:51.941 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:51.941 # We can assume this is a mount from a host where container is running,
00:02:51.941 # so fetch its hostname to easily identify the target swarm worker.
00:02:51.941 container="$(< /etc/hostname) ($agent)"
00:02:51.941 else
00:02:51.941 # Fallback
00:02:51.941 container=$agent
00:02:51.941 fi
00:02:51.941 fi
00:02:51.941 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:51.941 '
00:02:52.215 [Pipeline] }
00:02:52.231 [Pipeline] // withEnv
00:02:52.240 [Pipeline] setCustomBuildProperty
00:02:52.255 [Pipeline] stage
00:02:52.258 [Pipeline] { (Tests)
00:02:52.275 [Pipeline] sh
00:02:52.558 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:52.837 [Pipeline] sh
00:02:53.121 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:53.400 [Pipeline] timeout
00:02:53.400 Timeout set to expire in 50 min
00:02:53.402 [Pipeline] {
00:02:53.416 [Pipeline] sh
00:02:53.700 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:54.282 HEAD is now at 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:02:54.296 [Pipeline] sh
00:02:54.580 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:54.858 [Pipeline] sh
00:02:55.144 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:55.423 [Pipeline] sh
00:02:55.707 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:02:55.970 ++ readlink -f spdk_repo
00:02:55.970 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:55.970 + [[ -n /home/vagrant/spdk_repo ]]
00:02:55.970 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:55.970 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:55.970 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:55.970 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:55.970 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:55.970 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:55.970 + cd /home/vagrant/spdk_repo
00:02:55.970 + source /etc/os-release
00:02:55.970 ++ NAME='Fedora Linux'
00:02:55.970 ++ VERSION='39 (Cloud Edition)'
00:02:55.970 ++ ID=fedora
00:02:55.970 ++ VERSION_ID=39
00:02:55.970 ++ VERSION_CODENAME=
00:02:55.970 ++ PLATFORM_ID=platform:f39
00:02:55.970 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:55.970 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:55.970 ++ LOGO=fedora-logo-icon
00:02:55.970 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:55.970 ++ HOME_URL=https://fedoraproject.org/
00:02:55.970 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:55.970 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:55.970 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:55.970 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:55.970 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:55.970 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:55.970 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:55.970 ++ SUPPORT_END=2024-11-12
00:02:55.970 ++ VARIANT='Cloud Edition'
00:02:55.970 ++ VARIANT_ID=cloud
00:02:55.970 + uname -a
00:02:55.970 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:55.970 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:56.241 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:56.503 Hugepages
00:02:56.503 node hugesize free / total
00:02:56.503 node0 1048576kB 0 / 0
00:02:56.503 node0 2048kB 0 / 0
00:02:56.503
00:02:56.503 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:56.503 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:56.503 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:56.503 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:02:56.503 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:56.503 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:02:56.503 + rm -f /tmp/spdk-ld-path
00:02:56.503 + source autorun-spdk.conf
00:02:56.503 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:56.503 ++ SPDK_TEST_NVME=1
00:02:56.503 ++ SPDK_TEST_FTL=1
00:02:56.503 ++ SPDK_TEST_ISAL=1
00:02:56.503 ++ SPDK_RUN_ASAN=1
00:02:56.503 ++ SPDK_RUN_UBSAN=1
00:02:56.503 ++ SPDK_TEST_XNVME=1
00:02:56.503 ++ SPDK_TEST_NVME_FDP=1
00:02:56.503 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:56.503 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:56.503 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:56.503 ++ RUN_NIGHTLY=1
00:02:56.503 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:56.503 + [[ -n '' ]]
00:02:56.503 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:56.503 + for M in /var/spdk/build-*-manifest.txt
00:02:56.503 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:56.503 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:56.503 + for M in /var/spdk/build-*-manifest.txt
00:02:56.503 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:56.503 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:56.764 + for M in /var/spdk/build-*-manifest.txt
00:02:56.764 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:56.764 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:56.764 ++ uname
00:02:56.764 + [[ Linux == \L\i\n\u\x ]]
00:02:56.764 + sudo dmesg -T
00:02:56.764 + sudo dmesg --clear
00:02:56.764 + dmesg_pid=5763
00:02:56.764 + [[ Fedora Linux == FreeBSD ]]
00:02:56.764 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:56.764 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:56.764 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:56.764 + [[ -x /usr/src/fio-static/fio ]]
00:02:56.764 + sudo dmesg -Tw
00:02:56.764 + export FIO_BIN=/usr/src/fio-static/fio
00:02:56.764 + FIO_BIN=/usr/src/fio-static/fio
00:02:56.764 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:56.764 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:56.764 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:56.764 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:56.764 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:56.764 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:56.764 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:56.764 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:56.764 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:56.764 21:35:19 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:56.764 21:35:19 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:56.764 21:35:19 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:56.764 21:35:19 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:02:56.764 21:35:19 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:02:56.764 21:35:19 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:02:56.764 21:35:19 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:02:56.764 21:35:19 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:02:56.764 21:35:19 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:02:56.764 21:35:19 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:02:56.764 21:35:19 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:56.764 21:35:19 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:56.764 21:35:19 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:56.764 21:35:19 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1
00:02:56.764 21:35:19 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:56.764 21:35:19 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:56.764 21:35:19 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:56.764 21:35:19 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:56.764 21:35:19 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:56.764 21:35:19 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:56.764 21:35:19 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:56.764 21:35:19 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:56.764 21:35:19 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:56.765 21:35:19 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:56.765 21:35:19 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:56.765 21:35:19 -- paths/export.sh@5 -- $ export PATH
00:02:56.765 21:35:19 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:56.765 21:35:19 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:56.765 21:35:19 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:56.765 21:35:19 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732743319.XXXXXX
00:02:56.765 21:35:19 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732743319.ssaapq
00:02:56.765 21:35:19 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:56.765 21:35:19 -- common/autobuild_common.sh@499 -- $ '[' -n v22.11.4 ']'
00:02:56.765 21:35:19 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:56.765 21:35:19 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:02:56.765 21:35:19 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:56.765 21:35:19 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:56.765 21:35:19 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:56.765 21:35:19 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:56.765 21:35:19 -- common/autotest_common.sh@10 -- $ set +x
00:02:56.765 21:35:19 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:02:56.765 21:35:19 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:56.765 21:35:19 -- pm/common@17 -- $ local monitor
00:02:56.765 21:35:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:56.765 21:35:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:56.765 21:35:19 -- pm/common@25 -- $ sleep 1
00:02:56.765 21:35:19 -- pm/common@21 -- $ date +%s
00:02:56.765 21:35:19 -- pm/common@21 -- $ date +%s
00:02:56.765 21:35:19 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732743319
00:02:56.765 21:35:19 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732743319
00:02:57.025 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732743319_collect-vmstat.pm.log
00:02:57.025 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732743319_collect-cpu-load.pm.log
00:02:57.966 21:35:20 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:57.966 21:35:20 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:57.966 21:35:20 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:57.966 21:35:20 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:57.966 21:35:20 -- spdk/autobuild.sh@16 -- $ date -u
00:02:57.966 Wed Nov 27 09:35:20 PM UTC 2024
00:02:57.966 21:35:20 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:57.966 v25.01-pre-276-g35cd3e84d
00:02:57.967 21:35:20 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:57.967 21:35:20 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:57.967 21:35:20 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:57.967 21:35:20 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:57.967 21:35:20 -- common/autotest_common.sh@10 -- $ set +x
00:02:57.967 ************************************
00:02:57.967 START TEST asan
00:02:57.967 ************************************
00:02:57.967 using asan
00:02:57.967 21:35:20 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:57.967
00:02:57.967 real 0m0.000s
00:02:57.967 user 0m0.000s
00:02:57.967 sys 0m0.000s
00:02:57.967 ************************************
00:02:57.967 END TEST asan
00:02:57.967 ************************************
00:02:57.967 21:35:20 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:57.967 21:35:20 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:57.967 21:35:20 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:57.967 21:35:20 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:57.967 21:35:20 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:57.967 21:35:20 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:57.967 21:35:20 -- common/autotest_common.sh@10 -- $ set +x
00:02:57.967 ************************************
00:02:57.967 START TEST ubsan
00:02:57.967 ************************************
00:02:57.967 using ubsan
00:02:57.967 21:35:20 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:57.967
00:02:57.967 real 0m0.000s
00:02:57.967 user 0m0.000s
00:02:57.967 sys 0m0.000s
00:02:57.967 ************************************
00:02:57.967 END TEST ubsan
00:02:57.967 ************************************
00:02:57.967 21:35:20 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:57.967 21:35:20 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:57.967 21:35:20 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
00:02:57.967 21:35:20 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:02:57.967 21:35:20 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk
00:02:57.967 21:35:20 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
00:02:57.967 21:35:20 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:57.967 21:35:20 -- common/autotest_common.sh@10 -- $ set +x
00:02:57.967 ************************************
00:02:57.967 START TEST build_native_dpdk
00:02:57.967 ************************************
00:02:57.967 21:35:21 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:02:57.967 caf0f5d395 version: 22.11.4
00:02:57.967 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:02:57.967 dc9c799c7d vhost: fix missing spinlock unlock
00:02:57.967 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:02:57.967 6ef77f2a5e net/gve: fix RX buffer size alignment
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm")
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]]
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']'
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 21.11.0
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1
00:02:57.967 patching file config/rte_config.h
00:02:57.967 Hunk #1 succeeded at 60 (offset 1 line).
00:02:57.967 21:35:21 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 22.11.4 24.07.0
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:02:57.967 21:35:21 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:57.968 21:35:21 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:02:57.968 21:35:21 build_native_dpdk -- scripts/common.sh@368 -- $ return 0
00:02:57.968 21:35:21 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1
00:02:57.968 patching file lib/pcapng/rte_pcapng.c
00:02:57.968 Hunk #1 succeeded at 110 (offset -18 lines).
00:02:57.968 21:35:21 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 22.11.4 24.07.0
00:02:57.968 21:35:21 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0
00:02:57.968 21:35:21 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:57.968 21:35:21 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:57.968 21:35:21 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:57.968 21:35:21 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:57.968 21:35:21 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:57.968 21:35:21 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:57.968 21:35:21 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
00:02:57.968 21:35:21 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:02:57.968 21:35:21 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:57.968 21:35:21 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:57.968 21:35:21 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:57.968 21:35:21 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
00:02:57.968 21:35:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:57.968 21:35:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:57.968 21:35:21 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:02:57.968 21:35:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:02:57.968 21:35:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:02:57.968 21:35:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:02:57.968 21:35:21 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:02:57.968 21:35:21 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:02:57.968 21:35:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:57.968 21:35:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:57.968 21:35:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:57.968 21:35:21 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:02:57.968 21:35:21 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:57.968 21:35:21 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:02:57.968 21:35:21 build_native_dpdk -- scripts/common.sh@368 -- $ return 1
00:02:57.968 21:35:21 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false
00:02:57.968 21:35:21 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s
00:02:58.229 21:35:21 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']'
00:02:58.229 21:35:21 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm
00:02:58.229 21:35:21 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:03:02.435 The Meson build system
00:03:02.435 Version: 1.5.0
00:03:02.435 Source dir: /home/vagrant/spdk_repo/dpdk
00:03:02.435 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:03:02.435 Build type: native build
00:03:02.435 Program cat found: YES (/usr/bin/cat)
00:03:02.435 Project name: DPDK
00:03:02.435 Project version: 22.11.4
00:03:02.435 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:02.435 C linker for the host machine: gcc ld.bfd 2.40-14
00:03:02.435 Host machine cpu family: x86_64
00:03:02.435 Host machine cpu: x86_64
00:03:02.435 Message: ## Building in Developer Mode ##
00:03:02.435 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:02.435 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:03:02.435 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:03:02.435 Program objdump found: YES (/usr/bin/objdump)
00:03:02.435 Program python3 found: YES (/usr/bin/python3)
00:03:02.435 Program cat found: YES (/usr/bin/cat)
00:03:02.435 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:03:02.435 Checking for size of "void *" : 8
00:03:02.435 Checking for size of "void *" : 8 (cached)
00:03:02.435 Library m found: YES
00:03:02.435 Library numa found: YES
00:03:02.435 Has header "numaif.h" : YES
00:03:02.435 Library fdt found: NO
00:03:02.435 Library execinfo found: NO
00:03:02.435 Has header "execinfo.h" : YES
00:03:02.435 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:02.435 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:02.435 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:02.435 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:02.435 Run-time dependency openssl found: YES 3.1.1
00:03:02.435 Run-time dependency libpcap found: YES 1.10.4
00:03:02.435 Has header "pcap.h" with dependency libpcap: YES
00:03:02.435 Compiler for C supports arguments -Wcast-qual: YES
00:03:02.435 Compiler for C supports arguments -Wdeprecated: YES
00:03:02.435 Compiler for C supports arguments -Wformat: YES
00:03:02.435 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:02.435 Compiler for C supports arguments -Wformat-security: NO
00:03:02.435 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:02.435 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:02.435 Compiler for C supports arguments -Wnested-externs: YES
00:03:02.435 Compiler for C supports arguments -Wold-style-definition: YES
00:03:02.435 Compiler for C supports arguments -Wpointer-arith: YES
00:03:02.435 Compiler for C supports arguments -Wsign-compare: YES
00:03:02.435 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:02.435 Compiler for C supports arguments -Wundef: YES
00:03:02.435 Compiler for C supports arguments -Wwrite-strings: YES
00:03:02.435 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:02.435 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:02.435 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:02.435 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:02.435 Compiler for C supports arguments -mavx512f: YES
00:03:02.435 Checking if "AVX512 checking" compiles: YES
00:03:02.435 Fetching value of define "__SSE4_2__" : 1
00:03:02.435 Fetching value of define "__AES__" : 1
00:03:02.435 Fetching value of define "__AVX__" : 1
00:03:02.436 Fetching value of define "__AVX2__" : 1
00:03:02.436 Fetching value of define "__AVX512BW__" : 1
00:03:02.436 Fetching value of define "__AVX512CD__" : 1
00:03:02.436 Fetching value of define "__AVX512DQ__" : 1
00:03:02.436 Fetching value of define "__AVX512F__" : 1
00:03:02.436 Fetching value of define "__AVX512VL__" : 1
00:03:02.436 Fetching value of define "__PCLMUL__" : 1
00:03:02.436 Fetching value of define "__RDRND__" : 1
00:03:02.436 Fetching value of define "__RDSEED__" : 1
00:03:02.436 Fetching value of define "__VPCLMULQDQ__" : 1
00:03:02.436 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:02.436 Message: lib/kvargs: Defining dependency "kvargs"
00:03:02.436 Message: lib/telemetry: Defining dependency "telemetry"
00:03:02.436 Checking for function "getentropy" : YES
00:03:02.436 Message: lib/eal: Defining dependency "eal"
00:03:02.436 Message: lib/ring: Defining dependency "ring"
00:03:02.436 Message: lib/rcu: Defining dependency "rcu"
00:03:02.436 Message: lib/mempool: Defining dependency "mempool"
00:03:02.436 Message: lib/mbuf: Defining dependency "mbuf"
00:03:02.436 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:02.436 Fetching value of define "__AVX512F__" : 1 (cached)
define "__AVX512F__" : 1 (cached) 00:03:02.436 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:02.436 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:02.436 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:02.436 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:03:02.436 Compiler for C supports arguments -mpclmul: YES 00:03:02.436 Compiler for C supports arguments -maes: YES 00:03:02.436 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:02.436 Compiler for C supports arguments -mavx512bw: YES 00:03:02.436 Compiler for C supports arguments -mavx512dq: YES 00:03:02.436 Compiler for C supports arguments -mavx512vl: YES 00:03:02.436 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:02.436 Compiler for C supports arguments -mavx2: YES 00:03:02.436 Compiler for C supports arguments -mavx: YES 00:03:02.436 Message: lib/net: Defining dependency "net" 00:03:02.436 Message: lib/meter: Defining dependency "meter" 00:03:02.436 Message: lib/ethdev: Defining dependency "ethdev" 00:03:02.436 Message: lib/pci: Defining dependency "pci" 00:03:02.436 Message: lib/cmdline: Defining dependency "cmdline" 00:03:02.436 Message: lib/metrics: Defining dependency "metrics" 00:03:02.436 Message: lib/hash: Defining dependency "hash" 00:03:02.436 Message: lib/timer: Defining dependency "timer" 00:03:02.436 Fetching value of define "__AVX2__" : 1 (cached) 00:03:02.436 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:02.436 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:02.436 Fetching value of define "__AVX512CD__" : 1 (cached) 00:03:02.436 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:02.436 Message: lib/acl: Defining dependency "acl" 00:03:02.436 Message: lib/bbdev: Defining dependency "bbdev" 00:03:02.436 Message: lib/bitratestats: Defining dependency "bitratestats" 00:03:02.436 Run-time dependency libelf found: YES 0.191 00:03:02.436 Message: lib/bpf: Defining dependency "bpf" 00:03:02.436 Message: lib/cfgfile: Defining dependency "cfgfile" 00:03:02.436 Message: lib/compressdev: Defining dependency "compressdev" 00:03:02.436 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:02.436 Message: lib/distributor: Defining dependency "distributor" 00:03:02.436 Message: lib/efd: Defining dependency "efd" 00:03:02.436 Message: lib/eventdev: Defining dependency "eventdev" 00:03:02.436 Message: lib/gpudev: Defining dependency "gpudev" 00:03:02.436 Message: lib/gro: Defining dependency "gro" 00:03:02.436 Message: lib/gso: Defining dependency "gso" 00:03:02.436 Message: lib/ip_frag: Defining dependency "ip_frag" 00:03:02.436 Message: lib/jobstats: Defining dependency "jobstats" 00:03:02.436 Message: lib/latencystats: Defining dependency "latencystats" 00:03:02.436 Message: lib/lpm: Defining dependency "lpm" 00:03:02.436 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:02.436 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:02.436 Fetching value of define "__AVX512IFMA__" : 1 00:03:02.436 Message: lib/member: Defining dependency "member" 00:03:02.436 Message: lib/pcapng: Defining dependency "pcapng" 00:03:02.436 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:02.436 Message: lib/power: Defining dependency "power" 00:03:02.436 Message: lib/rawdev: Defining dependency "rawdev" 00:03:02.436 Message: lib/regexdev: Defining dependency "regexdev" 00:03:02.436 Message: lib/dmadev: Defining dependency "dmadev" 00:03:02.436 Message: lib/rib: Defining dependency "rib" 00:03:02.436 Message: lib/reorder: 
Defining dependency "reorder" 00:03:02.436 Message: lib/sched: Defining dependency "sched" 00:03:02.436 Message: lib/security: Defining dependency "security" 00:03:02.436 Message: lib/stack: Defining dependency "stack" 00:03:02.436 Has header "linux/userfaultfd.h" : YES 00:03:02.436 Message: lib/vhost: Defining dependency "vhost" 00:03:02.436 Message: lib/ipsec: Defining dependency "ipsec" 00:03:02.436 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:02.436 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:02.436 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:02.436 Message: lib/fib: Defining dependency "fib" 00:03:02.436 Message: lib/port: Defining dependency "port" 00:03:02.436 Message: lib/pdump: Defining dependency "pdump" 00:03:02.436 Message: lib/table: Defining dependency "table" 00:03:02.436 Message: lib/pipeline: Defining dependency "pipeline" 00:03:02.436 Message: lib/graph: Defining dependency "graph" 00:03:02.436 Message: lib/node: Defining dependency "node" 00:03:02.436 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:02.436 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:02.436 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:02.436 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:02.436 Compiler for C supports arguments -Wno-sign-compare: YES 00:03:02.436 Compiler for C supports arguments -Wno-unused-value: YES 00:03:02.436 Compiler for C supports arguments -Wno-format: YES 00:03:02.436 Compiler for C supports arguments -Wno-format-security: YES 00:03:02.436 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:03:02.436 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:02.436 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:03:02.436 Compiler for C supports arguments -Wno-unused-parameter: YES 00:03:03.380 Fetching value of define "__AVX2__" : 1 (cached) 00:03:03.380 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:03.380 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:03.380 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:03.380 Compiler for C supports arguments -mavx512bw: YES (cached) 00:03:03.380 Compiler for C supports arguments -march=skylake-avx512: YES 00:03:03.380 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:03:03.380 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:03.380 Configuring doxy-api.conf using configuration 00:03:03.380 Program sphinx-build found: NO 00:03:03.380 Configuring rte_build_config.h using configuration 00:03:03.380 Message: 00:03:03.380 ================= 00:03:03.380 Applications Enabled 00:03:03.380 ================= 00:03:03.380 00:03:03.380 apps: 00:03:03.380 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:03:03.380 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:03:03.380 test-security-perf, 00:03:03.380 00:03:03.380 Message: 00:03:03.380 ================= 00:03:03.380 Libraries Enabled 00:03:03.380 ================= 00:03:03.380 00:03:03.380 libs: 00:03:03.380 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:03:03.380 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:03:03.380 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:03:03.380 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:03:03.380 member, pcapng, power, rawdev, regexdev, dmadev, rib, 
reorder, 00:03:03.380 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:03:03.380 table, pipeline, graph, node, 00:03:03.380 00:03:03.380 Message: 00:03:03.380 =============== 00:03:03.380 Drivers Enabled 00:03:03.380 =============== 00:03:03.380 00:03:03.380 common: 00:03:03.380 00:03:03.380 bus: 00:03:03.380 pci, vdev, 00:03:03.380 mempool: 00:03:03.380 ring, 00:03:03.380 dma: 00:03:03.380 00:03:03.380 net: 00:03:03.380 i40e, 00:03:03.380 raw: 00:03:03.380 00:03:03.380 crypto: 00:03:03.380 00:03:03.380 compress: 00:03:03.380 00:03:03.380 regex: 00:03:03.380 00:03:03.380 vdpa: 00:03:03.380 00:03:03.380 event: 00:03:03.380 00:03:03.380 baseband: 00:03:03.380 00:03:03.380 gpu: 00:03:03.380 00:03:03.380 00:03:03.380 Message: 00:03:03.380 ================= 00:03:03.380 Content Skipped 00:03:03.380 ================= 00:03:03.380 00:03:03.380 apps: 00:03:03.380 00:03:03.380 libs: 00:03:03.380 kni: explicitly disabled via build config (deprecated lib) 00:03:03.380 flow_classify: explicitly disabled via build config (deprecated lib) 00:03:03.380 00:03:03.380 drivers: 00:03:03.380 common/cpt: not in enabled drivers build config 00:03:03.380 common/dpaax: not in enabled drivers build config 00:03:03.380 common/iavf: not in enabled drivers build config 00:03:03.380 common/idpf: not in enabled drivers build config 00:03:03.380 common/mvep: not in enabled drivers build config 00:03:03.380 common/octeontx: not in enabled drivers build config 00:03:03.380 bus/auxiliary: not in enabled drivers build config 00:03:03.380 bus/dpaa: not in enabled drivers build config 00:03:03.380 bus/fslmc: not in enabled drivers build config 00:03:03.380 bus/ifpga: not in enabled drivers build config 00:03:03.380 bus/vmbus: not in enabled drivers build config 00:03:03.380 common/cnxk: not in enabled drivers build config 00:03:03.380 common/mlx5: not in enabled drivers build config 00:03:03.380 common/qat: not in enabled drivers build config 00:03:03.380 common/sfc_efx: not in enabled drivers build config 00:03:03.380 mempool/bucket: not in enabled drivers build config 00:03:03.380 mempool/cnxk: not in enabled drivers build config 00:03:03.380 mempool/dpaa: not in enabled drivers build config 00:03:03.380 mempool/dpaa2: not in enabled drivers build config 00:03:03.380 mempool/octeontx: not in enabled drivers build config 00:03:03.380 mempool/stack: not in enabled drivers build config 00:03:03.380 dma/cnxk: not in enabled drivers build config 00:03:03.380 dma/dpaa: not in enabled drivers build config 00:03:03.380 dma/dpaa2: not in enabled drivers build config 00:03:03.380 dma/hisilicon: not in enabled drivers build config 00:03:03.380 dma/idxd: not in enabled drivers build config 00:03:03.380 dma/ioat: not in enabled drivers build config 00:03:03.380 dma/skeleton: not in enabled drivers build config 00:03:03.380 net/af_packet: not in enabled drivers build config 00:03:03.380 net/af_xdp: not in enabled drivers build config 00:03:03.380 net/ark: not in enabled drivers build config 00:03:03.380 net/atlantic: not in enabled drivers build config 00:03:03.380 net/avp: not in enabled drivers build config 00:03:03.380 net/axgbe: not in enabled drivers build config 00:03:03.380 net/bnx2x: not in enabled drivers build config 00:03:03.380 net/bnxt: not in enabled drivers build config 00:03:03.380 net/bonding: not in enabled drivers build config 00:03:03.380 net/cnxk: not in enabled drivers build config 00:03:03.380 net/cxgbe: not in enabled drivers build config 00:03:03.380 net/dpaa: not in enabled drivers build config 
00:03:03.380 net/dpaa2: not in enabled drivers build config 00:03:03.380 net/e1000: not in enabled drivers build config 00:03:03.380 net/ena: not in enabled drivers build config 00:03:03.380 net/enetc: not in enabled drivers build config 00:03:03.380 net/enetfec: not in enabled drivers build config 00:03:03.380 net/enic: not in enabled drivers build config 00:03:03.380 net/failsafe: not in enabled drivers build config 00:03:03.380 net/fm10k: not in enabled drivers build config 00:03:03.380 net/gve: not in enabled drivers build config 00:03:03.380 net/hinic: not in enabled drivers build config 00:03:03.380 net/hns3: not in enabled drivers build config 00:03:03.380 net/iavf: not in enabled drivers build config 00:03:03.380 net/ice: not in enabled drivers build config 00:03:03.380 net/idpf: not in enabled drivers build config 00:03:03.380 net/igc: not in enabled drivers build config 00:03:03.380 net/ionic: not in enabled drivers build config 00:03:03.380 net/ipn3ke: not in enabled drivers build config 00:03:03.380 net/ixgbe: not in enabled drivers build config 00:03:03.380 net/kni: not in enabled drivers build config 00:03:03.380 net/liquidio: not in enabled drivers build config 00:03:03.380 net/mana: not in enabled drivers build config 00:03:03.380 net/memif: not in enabled drivers build config 00:03:03.380 net/mlx4: not in enabled drivers build config 00:03:03.380 net/mlx5: not in enabled drivers build config 00:03:03.380 net/mvneta: not in enabled drivers build config 00:03:03.380 net/mvpp2: not in enabled drivers build config 00:03:03.381 net/netvsc: not in enabled drivers build config 00:03:03.381 net/nfb: not in enabled drivers build config 00:03:03.381 net/nfp: not in enabled drivers build config 00:03:03.381 net/ngbe: not in enabled drivers build config 00:03:03.381 net/null: not in enabled drivers build config 00:03:03.381 net/octeontx: not in enabled drivers build config 00:03:03.381 net/octeon_ep: not in enabled drivers build config 00:03:03.381 net/pcap: not in enabled drivers build config 00:03:03.381 net/pfe: not in enabled drivers build config 00:03:03.381 net/qede: not in enabled drivers build config 00:03:03.381 net/ring: not in enabled drivers build config 00:03:03.381 net/sfc: not in enabled drivers build config 00:03:03.381 net/softnic: not in enabled drivers build config 00:03:03.381 net/tap: not in enabled drivers build config 00:03:03.381 net/thunderx: not in enabled drivers build config 00:03:03.381 net/txgbe: not in enabled drivers build config 00:03:03.381 net/vdev_netvsc: not in enabled drivers build config 00:03:03.381 net/vhost: not in enabled drivers build config 00:03:03.381 net/virtio: not in enabled drivers build config 00:03:03.381 net/vmxnet3: not in enabled drivers build config 00:03:03.381 raw/cnxk_bphy: not in enabled drivers build config 00:03:03.381 raw/cnxk_gpio: not in enabled drivers build config 00:03:03.381 raw/dpaa2_cmdif: not in enabled drivers build config 00:03:03.381 raw/ifpga: not in enabled drivers build config 00:03:03.381 raw/ntb: not in enabled drivers build config 00:03:03.381 raw/skeleton: not in enabled drivers build config 00:03:03.381 crypto/armv8: not in enabled drivers build config 00:03:03.381 crypto/bcmfs: not in enabled drivers build config 00:03:03.381 crypto/caam_jr: not in enabled drivers build config 00:03:03.381 crypto/ccp: not in enabled drivers build config 00:03:03.381 crypto/cnxk: not in enabled drivers build config 00:03:03.381 crypto/dpaa_sec: not in enabled drivers build config 00:03:03.381 crypto/dpaa2_sec: not in 
enabled drivers build config 00:03:03.381 crypto/ipsec_mb: not in enabled drivers build config 00:03:03.381 crypto/mlx5: not in enabled drivers build config 00:03:03.381 crypto/mvsam: not in enabled drivers build config 00:03:03.381 crypto/nitrox: not in enabled drivers build config 00:03:03.381 crypto/null: not in enabled drivers build config 00:03:03.381 crypto/octeontx: not in enabled drivers build config 00:03:03.381 crypto/openssl: not in enabled drivers build config 00:03:03.381 crypto/scheduler: not in enabled drivers build config 00:03:03.381 crypto/uadk: not in enabled drivers build config 00:03:03.381 crypto/virtio: not in enabled drivers build config 00:03:03.381 compress/isal: not in enabled drivers build config 00:03:03.381 compress/mlx5: not in enabled drivers build config 00:03:03.381 compress/octeontx: not in enabled drivers build config 00:03:03.381 compress/zlib: not in enabled drivers build config 00:03:03.381 regex/mlx5: not in enabled drivers build config 00:03:03.381 regex/cn9k: not in enabled drivers build config 00:03:03.381 vdpa/ifc: not in enabled drivers build config 00:03:03.381 vdpa/mlx5: not in enabled drivers build config 00:03:03.381 vdpa/sfc: not in enabled drivers build config 00:03:03.381 event/cnxk: not in enabled drivers build config 00:03:03.381 event/dlb2: not in enabled drivers build config 00:03:03.381 event/dpaa: not in enabled drivers build config 00:03:03.381 event/dpaa2: not in enabled drivers build config 00:03:03.381 event/dsw: not in enabled drivers build config 00:03:03.381 event/opdl: not in enabled drivers build config 00:03:03.381 event/skeleton: not in enabled drivers build config 00:03:03.381 event/sw: not in enabled drivers build config 00:03:03.381 event/octeontx: not in enabled drivers build config 00:03:03.381 baseband/acc: not in enabled drivers build config 00:03:03.381 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:03:03.381 baseband/fpga_lte_fec: not in enabled drivers build config 00:03:03.381 baseband/la12xx: not in enabled drivers build config 00:03:03.381 baseband/null: not in enabled drivers build config 00:03:03.381 baseband/turbo_sw: not in enabled drivers build config 00:03:03.381 gpu/cuda: not in enabled drivers build config 00:03:03.381 00:03:03.381 00:03:03.381 Build targets in project: 309 00:03:03.381 00:03:03.381 DPDK 22.11.4 00:03:03.381 00:03:03.381 User defined options 00:03:03.381 libdir : lib 00:03:03.381 prefix : /home/vagrant/spdk_repo/dpdk/build 00:03:03.381 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:03:03.381 c_link_args : 00:03:03.381 enable_docs : false 00:03:03.381 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:03:03.381 enable_kmods : false 00:03:03.381 machine : native 00:03:03.381 tests : false 00:03:03.381 00:03:03.381 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:03.381 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
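The long run of "Compiler for C supports arguments -m...: YES" and cached "__AVX512...__" defines in the configure output above is Meson probing the compiler once per flag; DPDK enables the matching AVX2/AVX-512 objects built further down only where a probe succeeds. A probe of this kind boils down to a throwaway test compile, sketched below (illustrative only, not a command taken from this log):

    # By-hand equivalent of a "Compiler for C supports arguments -mavx512f" probe:
    # try to compile an empty translation unit with the flag and report the result.
    echo 'int main(void) { return 0; }' > /tmp/probe.c
    if cc -mavx512f -c /tmp/probe.c -o /dev/null 2>/dev/null; then
        echo "-mavx512f: YES"
    else
        echo "-mavx512f: NO"
    fi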
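The "User defined options" summary and the `meson setup` deprecation warning correspond to a configure invocation along the following lines. This is a reconstruction from the options echoed above (run from the DPDK source tree, and written in the explicit `meson setup` form the warning asks for); the literal command is issued by SPDK's autobuild scripts and may differ in detail:

    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dtests=false \
        -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
    ninja -C build-tmp -j10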
00:03:03.381 21:35:26 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:03:03.381 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:03.381 [1/738] Generating lib/rte_kvargs_mingw with a custom command 00:03:03.381 [2/738] Generating lib/rte_telemetry_def with a custom command 00:03:03.381 [3/738] Generating lib/rte_kvargs_def with a custom command 00:03:03.381 [4/738] Generating lib/rte_telemetry_mingw with a custom command 00:03:03.641 [5/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:03.641 [6/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:03.641 [7/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:03.641 [8/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:03.641 [9/738] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:03.641 [10/738] Linking static target lib/librte_kvargs.a 00:03:03.641 [11/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:03.641 [12/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:03.641 [13/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:03.641 [14/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:03.641 [15/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:03.641 [16/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:03.641 [17/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:03.641 [18/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:03.641 [19/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:03.641 [20/738] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.900 [21/738] Linking target lib/librte_kvargs.so.23.0 00:03:03.900 [22/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:03:03.900 [23/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:03.900 [24/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:03.900 [25/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:03.900 [26/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:03.900 [27/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:03.900 [28/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:03.900 [29/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:03.900 [30/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:03.900 [31/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:03.900 [32/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:03.900 [33/738] Linking static target lib/librte_telemetry.a 00:03:03.900 [34/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:03.900 [35/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:03.900 [36/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:04.158 [37/738] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:03:04.158 [38/738] Compiling C 
object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:04.158 [39/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:04.158 [40/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:04.158 [41/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:04.158 [42/738] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.158 [43/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:04.159 [44/738] Linking target lib/librte_telemetry.so.23.0 00:03:04.159 [45/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:04.417 [46/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:04.417 [47/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:04.417 [48/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:04.417 [49/738] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:03:04.417 [50/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:04.417 [51/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:04.417 [52/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:04.417 [53/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:04.417 [54/738] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:04.417 [55/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:04.417 [56/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:04.417 [57/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:04.417 [58/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:04.417 [59/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:04.417 [60/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:04.417 [61/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:04.417 [62/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:04.417 [63/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:04.417 [64/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:04.417 [65/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:03:04.676 [66/738] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:04.676 [67/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:04.676 [68/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:04.676 [69/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:04.676 [70/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:04.676 [71/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:04.676 [72/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:04.676 [73/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:04.676 [74/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:04.676 [75/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:04.676 [76/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:04.676 [77/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:04.676 [78/738] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:04.676 [79/738] Generating lib/rte_eal_def with a custom command 00:03:04.676 [80/738] Generating lib/rte_eal_mingw with a custom command 00:03:04.676 [81/738] Generating lib/rte_ring_def with a custom command 00:03:04.676 [82/738] Generating lib/rte_ring_mingw with a custom command 00:03:04.676 [83/738] Generating lib/rte_rcu_def with a custom command 00:03:04.676 [84/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:04.676 [85/738] Generating lib/rte_rcu_mingw with a custom command 00:03:04.676 [86/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:04.934 [87/738] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:04.934 [88/738] Linking static target lib/librte_ring.a 00:03:04.934 [89/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:04.934 [90/738] Generating lib/rte_mempool_def with a custom command 00:03:04.934 [91/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:04.934 [92/738] Generating lib/rte_mempool_mingw with a custom command 00:03:04.934 [93/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:04.934 [94/738] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.192 [95/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:05.192 [96/738] Generating lib/rte_mbuf_def with a custom command 00:03:05.192 [97/738] Generating lib/rte_mbuf_mingw with a custom command 00:03:05.192 [98/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:05.192 [99/738] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:05.192 [100/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:05.192 [101/738] Linking static target lib/librte_eal.a 00:03:05.452 [102/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:05.452 [103/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:05.452 [104/738] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:05.452 [105/738] Linking static target lib/librte_rcu.a 00:03:05.452 [106/738] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:05.452 [107/738] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:05.452 [108/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:05.452 [109/738] Linking static target lib/librte_mempool.a 00:03:05.452 [110/738] Generating lib/rte_net_def with a custom command 00:03:05.452 [111/738] Generating lib/rte_net_mingw with a custom command 00:03:05.452 [112/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:05.452 [113/738] Generating lib/rte_meter_def with a custom command 00:03:05.710 [114/738] Generating lib/rte_meter_mingw with a custom command 00:03:05.710 [115/738] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:05.710 [116/738] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:05.710 [117/738] Linking static target lib/librte_meter.a 00:03:05.710 [118/738] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.710 [119/738] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:05.710 [120/738] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:05.710 [121/738] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:05.710 [122/738] Linking static target lib/librte_net.a 00:03:05.710 
[123/738] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.968 [124/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:05.968 [125/738] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.968 [126/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:05.968 [127/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:05.968 [128/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:05.968 [129/738] Linking static target lib/librte_mbuf.a 00:03:05.968 [130/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:05.968 [131/738] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.226 [132/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:06.226 [133/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:06.483 [134/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:06.483 [135/738] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.483 [136/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:06.483 [137/738] Generating lib/rte_ethdev_def with a custom command 00:03:06.483 [138/738] Generating lib/rte_ethdev_mingw with a custom command 00:03:06.483 [139/738] Generating lib/rte_pci_def with a custom command 00:03:06.483 [140/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:06.483 [141/738] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:06.483 [142/738] Linking static target lib/librte_pci.a 00:03:06.483 [143/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:06.483 [144/738] Generating lib/rte_pci_mingw with a custom command 00:03:06.483 [145/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:06.483 [146/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:06.742 [147/738] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.742 [148/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:06.742 [149/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:06.742 [150/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:06.742 [151/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:06.742 [152/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:06.742 [153/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:06.742 [154/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:06.742 [155/738] Generating lib/rte_cmdline_def with a custom command 00:03:06.742 [156/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:06.742 [157/738] Generating lib/rte_cmdline_mingw with a custom command 00:03:06.742 [158/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:06.742 [159/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:06.742 [160/738] Generating lib/rte_metrics_def with a custom command 00:03:06.742 [161/738] Generating lib/rte_metrics_mingw with a custom command 00:03:07.000 [162/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:07.000 [163/738] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:07.000 [164/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:07.000 [165/738] Linking static target lib/librte_cmdline.a 00:03:07.000 [166/738] Generating lib/rte_hash_def with a custom command 00:03:07.000 [167/738] Generating lib/rte_hash_mingw with a custom command 00:03:07.000 [168/738] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:07.000 [169/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:07.000 [170/738] Generating lib/rte_timer_def with a custom command 00:03:07.000 [171/738] Generating lib/rte_timer_mingw with a custom command 00:03:07.259 [172/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:07.259 [173/738] Linking static target lib/librte_metrics.a 00:03:07.259 [174/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:07.259 [175/738] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:07.259 [176/738] Linking static target lib/librte_timer.a 00:03:07.518 [177/738] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.518 [178/738] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:07.518 [179/738] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:07.518 [180/738] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.518 [181/738] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.518 [182/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:07.518 [183/738] Generating lib/rte_acl_def with a custom command 00:03:07.776 [184/738] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:07.776 [185/738] Generating lib/rte_acl_mingw with a custom command 00:03:07.776 [186/738] Generating lib/rte_bbdev_def with a custom command 00:03:07.776 [187/738] Generating lib/rte_bbdev_mingw with a custom command 00:03:07.776 [188/738] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:07.776 [189/738] Generating lib/rte_bitratestats_def with a custom command 00:03:07.776 [190/738] Generating lib/rte_bitratestats_mingw with a custom command 00:03:07.776 [191/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:07.776 [192/738] Linking static target lib/librte_ethdev.a 00:03:08.035 [193/738] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:08.035 [194/738] Linking static target lib/librte_bitratestats.a 00:03:08.035 [195/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:08.035 [196/738] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:08.035 [197/738] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.292 [198/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:08.292 [199/738] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:08.292 [200/738] Linking static target lib/librte_bbdev.a 00:03:08.292 [201/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:08.551 [202/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:08.551 [203/738] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:08.551 [204/738] Linking static target lib/librte_hash.a 00:03:08.551 [205/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:08.551 [206/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 
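The [n/738] counters in the build output come from a single ninja graph covering every enabled library, driver and app, so individual pieces can be listed or rebuilt without rerunning the whole job. Two illustrative queries against this job's build directory (target names follow the lib/librte_<name>.a pattern visible in the log):

    cd /home/vagrant/spdk_repo/dpdk/build-tmp
    # List the targets ninja knows about in this build directory.
    ninja -t targets all
    # Rebuild only the EAL static library and whatever it depends on.
    ninja lib/librte_eal.a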
00:03:08.809 [207/738] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.809 [208/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:08.809 [209/738] Generating lib/rte_bpf_def with a custom command 00:03:08.809 [210/738] Generating lib/rte_bpf_mingw with a custom command 00:03:09.067 [211/738] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.067 [212/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:09.067 [213/738] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:09.067 [214/738] Generating lib/rte_cfgfile_def with a custom command 00:03:09.067 [215/738] Linking static target lib/librte_cfgfile.a 00:03:09.067 [216/738] Generating lib/rte_cfgfile_mingw with a custom command 00:03:09.067 [217/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:09.325 [218/738] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.325 [219/738] Generating lib/rte_compressdev_def with a custom command 00:03:09.325 [220/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:09.325 [221/738] Generating lib/rte_compressdev_mingw with a custom command 00:03:09.325 [222/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:03:09.325 [223/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:09.325 [224/738] Linking static target lib/librte_bpf.a 00:03:09.325 [225/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:09.325 [226/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:09.325 [227/738] Generating lib/rte_cryptodev_def with a custom command 00:03:09.325 [228/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:09.325 [229/738] Linking static target lib/librte_compressdev.a 00:03:09.584 [230/738] Generating lib/rte_cryptodev_mingw with a custom command 00:03:09.584 [231/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:03:09.584 [232/738] Linking static target lib/librte_acl.a 00:03:09.584 [233/738] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.584 [234/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:09.584 [235/738] Generating lib/rte_distributor_def with a custom command 00:03:09.584 [236/738] Generating lib/rte_distributor_mingw with a custom command 00:03:09.865 [237/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:09.865 [238/738] Generating lib/rte_efd_def with a custom command 00:03:09.865 [239/738] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.865 [240/738] Generating lib/rte_efd_mingw with a custom command 00:03:09.865 [241/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:09.865 [242/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:09.865 [243/738] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.865 [244/738] Linking target lib/librte_eal.so.23.0 00:03:10.205 [245/738] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:03:10.205 [246/738] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.205 [247/738] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:10.205 [248/738] Linking target lib/librte_ring.so.23.0 00:03:10.205 [249/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:10.205 [250/738] Linking target lib/librte_meter.so.23.0 00:03:10.205 [251/738] Linking target lib/librte_pci.so.23.0 00:03:10.205 [252/738] Linking target lib/librte_timer.so.23.0 00:03:10.205 [253/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:10.205 [254/738] Linking target lib/librte_acl.so.23.0 00:03:10.205 [255/738] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:03:10.205 [256/738] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:03:10.205 [257/738] Linking target lib/librte_rcu.so.23.0 00:03:10.205 [258/738] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:03:10.205 [259/738] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:03:10.205 [260/738] Linking target lib/librte_mempool.so.23.0 00:03:10.205 [261/738] Linking static target lib/librte_distributor.a 00:03:10.205 [262/738] Linking target lib/librte_cfgfile.so.23.0 00:03:10.205 [263/738] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:03:10.205 [264/738] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:03:10.205 [265/738] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:03:10.205 [266/738] Linking target lib/librte_mbuf.so.23.0 00:03:10.464 [267/738] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:03:10.464 [268/738] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.464 [269/738] Linking target lib/librte_net.so.23.0 00:03:10.464 [270/738] Linking target lib/librte_bbdev.so.23.0 00:03:10.464 [271/738] Linking target lib/librte_compressdev.so.23.0 00:03:10.464 [272/738] Linking target lib/librte_distributor.so.23.0 00:03:10.464 [273/738] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:03:10.464 [274/738] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:10.464 [275/738] Linking target lib/librte_cmdline.so.23.0 00:03:10.464 [276/738] Linking static target lib/librte_efd.a 00:03:10.464 [277/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:10.464 [278/738] Linking target lib/librte_hash.so.23.0 00:03:10.464 [279/738] Generating lib/rte_eventdev_def with a custom command 00:03:10.464 [280/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:10.723 [281/738] Generating lib/rte_eventdev_mingw with a custom command 00:03:10.723 [282/738] Generating lib/rte_gpudev_def with a custom command 00:03:10.723 [283/738] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:03:10.723 [284/738] Generating lib/rte_gpudev_mingw with a custom command 00:03:10.723 [285/738] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.723 [286/738] Linking target lib/librte_efd.so.23.0 00:03:10.723 [287/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:10.723 [288/738] Linking static target lib/librte_cryptodev.a 00:03:10.982 [289/738] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.982 [290/738] Linking target lib/librte_ethdev.so.23.0 
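Each "Generating lib/<name>.sym_chk" step above is DPDK's symbol-export check, comparing what the shared object exports against the library's version map; the mechanics are internal to DPDK's build tooling, but the result can be inspected directly on the freshly linked library. An illustrative sketch using the ethdev object just linked:

    cd /home/vagrant/spdk_repo/dpdk/build-tmp
    # SONAME baked into the shared object named in the log.
    readelf -d lib/librte_ethdev.so.23.0 | grep SONAME
    # Defined symbols the shared object exports.
    nm -D --defined-only lib/librte_ethdev.so.23.0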
00:03:10.982 [291/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:10.982 [292/738] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:03:10.982 [293/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:10.982 [294/738] Linking target lib/librte_metrics.so.23.0 00:03:10.982 [295/738] Linking target lib/librte_bpf.so.23.0 00:03:10.982 [296/738] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:10.982 [297/738] Linking static target lib/librte_gpudev.a 00:03:10.982 [298/738] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:03:11.241 [299/738] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:03:11.241 [300/738] Linking target lib/librte_bitratestats.so.23.0 00:03:11.241 [301/738] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:11.241 [302/738] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:11.241 [303/738] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:11.241 [304/738] Generating lib/rte_gro_def with a custom command 00:03:11.241 [305/738] Generating lib/rte_gro_mingw with a custom command 00:03:11.241 [306/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:11.500 [307/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:11.500 [308/738] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:11.500 [309/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:11.500 [310/738] Generating lib/rte_gso_def with a custom command 00:03:11.500 [311/738] Generating lib/rte_gso_mingw with a custom command 00:03:11.500 [312/738] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:11.500 [313/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:11.500 [314/738] Linking static target lib/librte_gro.a 00:03:11.500 [315/738] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:11.500 [316/738] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.500 [317/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:11.500 [318/738] Linking target lib/librte_gpudev.so.23.0 00:03:11.758 [319/738] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:11.758 [320/738] Linking static target lib/librte_gso.a 00:03:11.758 [321/738] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.758 [322/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:11.758 [323/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:11.758 [324/738] Linking static target lib/librte_eventdev.a 00:03:11.758 [325/738] Linking target lib/librte_gro.so.23.0 00:03:11.758 [326/738] Generating lib/rte_ip_frag_def with a custom command 00:03:11.758 [327/738] Generating lib/rte_ip_frag_mingw with a custom command 00:03:11.758 [328/738] Generating lib/rte_jobstats_def with a custom command 00:03:11.758 [329/738] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.758 [330/738] Generating lib/rte_jobstats_mingw with a custom command 00:03:11.758 [331/738] Linking target lib/librte_gso.so.23.0 00:03:11.758 [332/738] Generating lib/rte_latencystats_def with a custom command 00:03:11.758 [333/738] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:11.758 [334/738] Linking static 
target lib/librte_jobstats.a 00:03:11.758 [335/738] Generating lib/rte_latencystats_mingw with a custom command 00:03:11.758 [336/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:11.758 [337/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:12.017 [338/738] Generating lib/rte_lpm_def with a custom command 00:03:12.017 [339/738] Generating lib/rte_lpm_mingw with a custom command 00:03:12.017 [340/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:12.017 [341/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:12.017 [342/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:12.017 [343/738] Linking static target lib/librte_ip_frag.a 00:03:12.017 [344/738] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.017 [345/738] Linking target lib/librte_jobstats.so.23.0 00:03:12.275 [346/738] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:12.275 [347/738] Linking static target lib/librte_latencystats.a 00:03:12.275 [348/738] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.275 [349/738] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.275 [350/738] Linking target lib/librte_ip_frag.so.23.0 00:03:12.275 [351/738] Linking target lib/librte_cryptodev.so.23.0 00:03:12.275 [352/738] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:12.275 [353/738] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.276 [354/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:12.276 [355/738] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:03:12.276 [356/738] Generating lib/rte_member_def with a custom command 00:03:12.276 [357/738] Linking target lib/librte_latencystats.so.23.0 00:03:12.276 [358/738] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:03:12.276 [359/738] Generating lib/rte_member_mingw with a custom command 00:03:12.276 [360/738] Generating lib/rte_pcapng_def with a custom command 00:03:12.276 [361/738] Generating lib/rte_pcapng_mingw with a custom command 00:03:12.534 [362/738] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:12.534 [363/738] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:12.534 [364/738] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:12.534 [365/738] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:12.534 [366/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:03:12.792 [367/738] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:12.792 [368/738] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:12.792 [369/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:12.792 [370/738] Linking static target lib/librte_lpm.a 00:03:12.792 [371/738] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:12.792 [372/738] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:03:12.792 [373/738] Generating lib/rte_power_def with a custom command 00:03:12.792 [374/738] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:12.792 [375/738] Generating 
lib/rte_power_mingw with a custom command 00:03:12.792 [376/738] Linking static target lib/librte_pcapng.a 00:03:12.792 [377/738] Generating lib/rte_rawdev_def with a custom command 00:03:12.792 [378/738] Generating lib/rte_rawdev_mingw with a custom command 00:03:12.792 [379/738] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:13.050 [380/738] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:13.050 [381/738] Generating lib/rte_regexdev_def with a custom command 00:03:13.050 [382/738] Generating lib/rte_regexdev_mingw with a custom command 00:03:13.050 [383/738] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:03:13.050 [384/738] Generating lib/rte_dmadev_def with a custom command 00:03:13.050 [385/738] Generating lib/rte_dmadev_mingw with a custom command 00:03:13.050 [386/738] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.050 [387/738] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.050 [388/738] Linking target lib/librte_pcapng.so.23.0 00:03:13.050 [389/738] Linking target lib/librte_lpm.so.23.0 00:03:13.050 [390/738] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:13.050 [391/738] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.050 [392/738] Linking static target lib/librte_power.a 00:03:13.050 [393/738] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:03:13.050 [394/738] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:03:13.050 [395/738] Generating lib/rte_rib_def with a custom command 00:03:13.050 [396/738] Linking target lib/librte_eventdev.so.23.0 00:03:13.050 [397/738] Generating lib/rte_rib_mingw with a custom command 00:03:13.050 [398/738] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:13.050 [399/738] Linking static target lib/librte_rawdev.a 00:03:13.050 [400/738] Generating lib/rte_reorder_def with a custom command 00:03:13.050 [401/738] Generating lib/rte_reorder_mingw with a custom command 00:03:13.308 [402/738] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:03:13.308 [403/738] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:13.308 [404/738] Linking static target lib/librte_regexdev.a 00:03:13.308 [405/738] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:13.308 [406/738] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:13.308 [407/738] Linking static target lib/librte_dmadev.a 00:03:13.308 [408/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:13.308 [409/738] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:13.308 [410/738] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.567 [411/738] Generating lib/rte_sched_def with a custom command 00:03:13.567 [412/738] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:13.567 [413/738] Generating lib/rte_sched_mingw with a custom command 00:03:13.567 [414/738] Linking target lib/librte_rawdev.so.23.0 00:03:13.567 [415/738] Generating lib/rte_security_def with a custom command 00:03:13.567 [416/738] Generating lib/rte_security_mingw with a custom command 00:03:13.567 [417/738] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:13.567 [418/738] Linking static target 
lib/librte_reorder.a 00:03:13.567 [419/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:13.567 [420/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:13.567 [421/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:13.567 [422/738] Linking static target lib/librte_rib.a 00:03:13.567 [423/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:13.567 [424/738] Generating lib/rte_stack_def with a custom command 00:03:13.567 [425/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:13.567 [426/738] Linking static target lib/librte_member.a 00:03:13.567 [427/738] Linking static target lib/librte_stack.a 00:03:13.567 [428/738] Generating lib/rte_stack_mingw with a custom command 00:03:13.567 [429/738] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.567 [430/738] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.567 [431/738] Linking target lib/librte_dmadev.so.23.0 00:03:13.826 [432/738] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.826 [433/738] Linking target lib/librte_reorder.so.23.0 00:03:13.826 [434/738] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.826 [435/738] Linking target lib/librte_power.so.23.0 00:03:13.826 [436/738] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:13.826 [437/738] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.826 [438/738] Linking target lib/librte_regexdev.so.23.0 00:03:13.826 [439/738] Linking target lib/librte_stack.so.23.0 00:03:13.826 [440/738] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:03:13.826 [441/738] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.826 [442/738] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:13.826 [443/738] Linking target lib/librte_member.so.23.0 00:03:13.826 [444/738] Linking static target lib/librte_security.a 00:03:13.826 [445/738] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.826 [446/738] Linking target lib/librte_rib.so.23.0 00:03:14.084 [447/738] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:03:14.084 [448/738] Generating lib/rte_vhost_def with a custom command 00:03:14.084 [449/738] Generating lib/rte_vhost_mingw with a custom command 00:03:14.084 [450/738] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:14.084 [451/738] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:14.084 [452/738] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.343 [453/738] Linking target lib/librte_security.so.23.0 00:03:14.343 [454/738] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:14.343 [455/738] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:03:14.343 [456/738] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:14.601 [457/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:14.601 [458/738] Generating lib/rte_ipsec_def with a custom command 00:03:14.601 [459/738] Generating lib/rte_ipsec_mingw with a custom command 00:03:14.601 [460/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:14.601 [461/738] Compiling C 
object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:14.601 [462/738] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:14.601 [463/738] Linking static target lib/librte_sched.a 00:03:14.601 [464/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:14.859 [465/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:14.859 [466/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:14.859 [467/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:14.859 [468/738] Generating lib/rte_fib_def with a custom command 00:03:14.859 [469/738] Generating lib/rte_fib_mingw with a custom command 00:03:14.859 [470/738] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.859 [471/738] Linking target lib/librte_sched.so.23.0 00:03:15.118 [472/738] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:03:15.118 [473/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:15.118 [474/738] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:15.118 [475/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:15.118 [476/738] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:15.375 [477/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:15.375 [478/738] Linking static target lib/librte_fib.a 00:03:15.375 [479/738] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:15.375 [480/738] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:15.375 [481/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:15.375 [482/738] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:15.375 [483/738] Linking static target lib/librte_ipsec.a 00:03:15.633 [484/738] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:15.633 [485/738] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.633 [486/738] Linking target lib/librte_fib.so.23.0 00:03:15.633 [487/738] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:15.634 [488/738] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.634 [489/738] Linking target lib/librte_ipsec.so.23.0 00:03:15.891 [490/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:15.891 [491/738] Generating lib/rte_port_def with a custom command 00:03:15.891 [492/738] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:15.891 [493/738] Generating lib/rte_port_mingw with a custom command 00:03:15.891 [494/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:16.150 [495/738] Generating lib/rte_pdump_def with a custom command 00:03:16.150 [496/738] Generating lib/rte_pdump_mingw with a custom command 00:03:16.150 [497/738] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:16.150 [498/738] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:16.150 [499/738] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:16.150 [500/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:16.150 [501/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:16.150 [502/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:16.150 [503/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:16.407 [504/738] 
Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:16.407 [505/738] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:16.665 [506/738] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:16.665 [507/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:16.665 [508/738] Linking static target lib/librte_port.a 00:03:16.665 [509/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:16.665 [510/738] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:16.665 [511/738] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:16.665 [512/738] Linking static target lib/librte_pdump.a 00:03:16.922 [513/738] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.922 [514/738] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.922 [515/738] Linking target lib/librte_pdump.so.23.0 00:03:16.922 [516/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:16.922 [517/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:16.922 [518/738] Linking target lib/librte_port.so.23.0 00:03:16.922 [519/738] Generating lib/rte_table_def with a custom command 00:03:16.923 [520/738] Generating lib/rte_table_mingw with a custom command 00:03:16.923 [521/738] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:03:17.180 [522/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:17.180 [523/738] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:17.180 [524/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:17.180 [525/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:17.437 [526/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:17.437 [527/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:17.437 [528/738] Generating lib/rte_pipeline_def with a custom command 00:03:17.437 [529/738] Generating lib/rte_pipeline_mingw with a custom command 00:03:17.437 [530/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:17.437 [531/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:17.437 [532/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:17.437 [533/738] Linking static target lib/librte_table.a 00:03:17.749 [534/738] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:17.749 [535/738] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:17.749 [536/738] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:17.749 [537/738] Generating lib/rte_graph_def with a custom command 00:03:17.749 [538/738] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:17.749 [539/738] Generating lib/rte_graph_mingw with a custom command 00:03:18.008 [540/738] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.008 [541/738] Linking target lib/librte_table.so.23.0 00:03:18.008 [542/738] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:18.008 [543/738] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:18.008 [544/738] Linking static target lib/librte_graph.a 00:03:18.008 [545/738] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:03:18.008 
[546/738] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:18.008 [547/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:18.266 [548/738] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:18.266 [549/738] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:18.266 [550/738] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:18.266 [551/738] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:18.266 [552/738] Generating lib/rte_node_def with a custom command 00:03:18.266 [553/738] Generating lib/rte_node_mingw with a custom command 00:03:18.524 [554/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:18.524 [555/738] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.524 [556/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:18.524 [557/738] Linking target lib/librte_graph.so.23.0 00:03:18.524 [558/738] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:18.524 [559/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:18.524 [560/738] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:03:18.524 [561/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:18.524 [562/738] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:18.524 [563/738] Generating drivers/rte_bus_pci_def with a custom command 00:03:18.524 [564/738] Generating drivers/rte_bus_pci_mingw with a custom command 00:03:18.782 [565/738] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:18.782 [566/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:18.782 [567/738] Generating drivers/rte_bus_vdev_def with a custom command 00:03:18.782 [568/738] Generating drivers/rte_bus_vdev_mingw with a custom command 00:03:18.782 [569/738] Generating drivers/rte_mempool_ring_def with a custom command 00:03:18.782 [570/738] Generating drivers/rte_mempool_ring_mingw with a custom command 00:03:18.782 [571/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:18.782 [572/738] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:18.782 [573/738] Linking static target lib/librte_node.a 00:03:18.782 [574/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:18.782 [575/738] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:18.782 [576/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:18.782 [577/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:18.782 [578/738] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:19.041 [579/738] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.041 [580/738] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:19.041 [581/738] Linking target lib/librte_node.so.23.0 00:03:19.041 [582/738] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:19.041 [583/738] Linking static target drivers/librte_bus_vdev.a 00:03:19.041 [584/738] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:19.041 [585/738] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:19.041 [586/738] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 
00:03:19.041 [587/738] Linking static target drivers/librte_bus_pci.a 00:03:19.041 [588/738] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.041 [589/738] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:19.299 [590/738] Linking target drivers/librte_bus_vdev.so.23.0 00:03:19.299 [591/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:19.299 [592/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:19.299 [593/738] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:03:19.299 [594/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:19.299 [595/738] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.299 [596/738] Linking target drivers/librte_bus_pci.so.23.0 00:03:19.299 [597/738] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:19.299 [598/738] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:19.557 [599/738] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:19.557 [600/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:19.557 [601/738] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:19.558 [602/738] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:19.558 [603/738] Linking static target drivers/librte_mempool_ring.a 00:03:19.558 [604/738] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:19.558 [605/738] Linking target drivers/librte_mempool_ring.so.23.0 00:03:19.815 [606/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:20.074 [607/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:20.331 [608/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:20.331 [609/738] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:20.331 [610/738] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:20.590 [611/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:20.590 [612/738] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:20.590 [613/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:20.590 [614/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:20.848 [615/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:21.107 [616/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:21.107 [617/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:21.107 [618/738] Generating drivers/rte_net_i40e_def with a custom command 00:03:21.107 [619/738] Generating drivers/rte_net_i40e_mingw with a custom command 00:03:21.365 [620/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:21.624 [621/738] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:21.624 [622/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:21.624 [623/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:21.882 [624/738] 
Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:21.882 [625/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:21.882 [626/738] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:21.882 [627/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:21.882 [628/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:21.882 [629/738] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:22.141 [630/738] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:22.141 [631/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:22.399 [632/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:22.399 [633/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:22.399 [634/738] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:22.399 [635/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:22.399 [636/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:22.658 [637/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:22.658 [638/738] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:22.658 [639/738] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:22.658 [640/738] Linking static target drivers/librte_net_i40e.a 00:03:22.658 [641/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:22.658 [642/738] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:22.658 [643/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:22.658 [644/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:22.916 [645/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:22.916 [646/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:23.174 [647/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:23.174 [648/738] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.174 [649/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:23.174 [650/738] Linking target drivers/librte_net_i40e.so.23.0 00:03:23.434 [651/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:23.434 [652/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:23.434 [653/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:23.434 [654/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:23.434 [655/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:23.434 [656/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:23.434 [657/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:23.692 [658/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:23.692 [659/738] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:23.692 [660/738] Linking static target lib/librte_vhost.a 00:03:23.692 [661/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:23.692 [662/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:23.692 [663/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:23.951 [664/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:23.951 [665/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:23.951 [666/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:24.209 [667/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:24.209 [668/738] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.209 [669/738] Linking target lib/librte_vhost.so.23.0 00:03:24.467 [670/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:24.467 [671/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:24.467 [672/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:24.467 [673/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:24.726 [674/738] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:24.726 [675/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:24.726 [676/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:24.726 [677/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:24.726 [678/738] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:24.985 [679/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:24.985 [680/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:24.985 [681/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:24.985 [682/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:24.985 [683/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:24.985 [684/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:24.985 [685/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:25.243 [686/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:25.244 [687/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:25.244 [688/738] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:25.502 [689/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:25.502 [690/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:25.502 [691/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:25.502 [692/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:25.761 [693/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:25.761 [694/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:25.761 [695/738] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:26.019 [696/738] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:26.019 [697/738] Compiling C 
object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:26.019 [698/738] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:26.019 [699/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:26.277 [700/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:26.277 [701/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:26.277 [702/738] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:26.535 [703/738] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:26.535 [704/738] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:26.535 [705/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:26.535 [706/738] Linking static target lib/librte_pipeline.a 00:03:26.793 [707/738] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:26.793 [708/738] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:26.793 [709/738] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:27.051 [710/738] Linking target app/dpdk-dumpcap 00:03:27.052 [711/738] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:27.052 [712/738] Linking target app/dpdk-pdump 00:03:27.052 [713/738] Linking target app/dpdk-proc-info 00:03:27.052 [714/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:27.311 [715/738] Linking target app/dpdk-test-acl 00:03:27.311 [716/738] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:27.311 [717/738] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:27.311 [718/738] Linking target app/dpdk-test-bbdev 00:03:27.311 [719/738] Linking target app/dpdk-test-cmdline 00:03:27.311 [720/738] Linking target app/dpdk-test-crypto-perf 00:03:27.311 [721/738] Linking target app/dpdk-test-compress-perf 00:03:27.569 [722/738] Linking target app/dpdk-test-eventdev 00:03:27.569 [723/738] Linking target app/dpdk-test-fib 00:03:27.569 [724/738] Linking target app/dpdk-test-flow-perf 00:03:27.569 [725/738] Linking target app/dpdk-test-gpudev 00:03:27.569 [726/738] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:27.569 [727/738] Linking target app/dpdk-test-pipeline 00:03:27.569 [728/738] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:27.837 [729/738] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:27.837 [730/738] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:27.837 [731/738] Linking target app/dpdk-testpmd 00:03:28.110 [732/738] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:28.110 [733/738] Linking target app/dpdk-test-sad 00:03:28.110 [734/738] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:28.368 [735/738] Linking target app/dpdk-test-regex 00:03:28.368 [736/738] Linking target app/dpdk-test-security-perf 00:03:29.307 [737/738] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.307 [738/738] Linking target lib/librte_pipeline.so.23.0 00:03:29.307 21:35:52 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:03:29.307 21:35:52 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:29.307 21:35:52 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:29.568 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:29.568 
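At this point all 738 DPDK build targets have finished and the autobuild script checks the host OS (uname -s, the FreeBSD comparison) before invoking the install step with ninja. For reference, a minimal sketch of the equivalent standalone sequence is shown below; the install prefix is inferred from the destination paths in the install output that follows, and the remaining meson options used by the CI script are not shown in this excerpt, so treat this as illustrative rather than the exact configuration:

    cd /home/vagrant/spdk_repo/dpdk
    meson setup build-tmp --prefix=$PWD/build   # prefix inferred from the install destinations below; other options not captured in this log
    ninja -C build-tmp -j10                     # build all targets, as in the [NNN/738] lines above
    ninja -C build-tmp -j10 install             # the command recorded in the log; populates <prefix>/share/dpdk/examples among other files

The install output that follows shows the examples tree being copied into /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples, consistent with that prefix.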
[0/1] Installing files. 00:03:29.829 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.829 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.829 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.830 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.830 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.830 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:29.831 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:29.832 
Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:29.832 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:29.833 
Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:29.833 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:29.833 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:29.833 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.833 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.833 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.833 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.833 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.833 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.833 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.833 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.833 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.833 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.833 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.833 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.833 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.833 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.833 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.094 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.094 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.094 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.094 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.094 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.094 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.094 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.094 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.094 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.094 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.094 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:30.095 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_rawdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:30.095 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:30.095 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:30.095 Installing drivers/librte_net_i40e.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:30.095 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:30.095 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:30.095 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:30.095 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:30.095 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:30.095 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:30.095 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:30.095 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:30.095 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:30.095 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:30.095 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:30.095 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:30.095 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:30.095 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:30.095 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:30.095 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:30.095 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:30.095 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:30.095 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.095 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.095 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.095 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:30.095 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:30.095 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:30.095 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:30.095 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:30.095 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:30.095 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:30.095 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:30.095 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:30.095 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:30.095 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to 
/home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:30.095 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:30.095 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.095 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.095 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.095 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.095 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.095 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.095 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.095 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 
Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.096 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 
Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.097 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 
Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:30.098 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:30.098 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:30.098 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:30.098 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:30.098 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:30.098 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:30.098 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:30.098 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:30.098 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:30.098 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:30.098 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:30.098 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:30.098 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:30.098 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:30.098 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:30.098 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:30.098 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:30.098 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:30.098 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:30.098 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:30.098 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 
00:03:30.098 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:30.098 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:30.098 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:30.098 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:30.098 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:30.098 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:30.098 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:30.098 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:30.098 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:30.098 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:30.098 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:30.098 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:30.098 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:30.098 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:30.098 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:30.098 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:30.099 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:30.099 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:30.099 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:30.099 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:30.099 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:30.099 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:30.099 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:30.099 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:30.099 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:30.099 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:30.099 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:30.099 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:30.099 
Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:30.099 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:30.099 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:30.099 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:30.099 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:30.099 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:30.099 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:30.099 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:30.099 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:30.099 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:30.099 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:30.099 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:30.099 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:30.099 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:30.099 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:30.099 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:30.099 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:30.099 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:30.099 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:30.099 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:30.099 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:30.099 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:30.099 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:30.099 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:30.099 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:30.099 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:30.099 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:30.099 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:30.099 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:30.099 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:30.099 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:30.099 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:30.099 Installing symlink pointing to librte_power.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:30.099 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:30.099 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:30.099 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:30.099 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:30.099 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:30.099 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:30.099 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:30.099 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:30.099 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:30.099 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:30.099 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:30.099 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:30.099 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:30.099 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:30.099 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:30.099 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:30.099 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:30.099 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:30.099 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:30.099 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:30.099 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:30.099 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:30.099 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:30.099 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:30.099 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:30.099 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:30.099 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:30.099 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:30.099 
Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:30.099 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:30.099 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:30.099 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:30.099 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:30.099 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:30.099 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:30.099 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:30.099 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:30.099 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:30.099 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:30.099 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:30.099 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:30.099 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:30.099 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:30.099 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:30.099 21:35:53 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:03:30.099 ************************************ 00:03:30.099 END TEST build_native_dpdk 00:03:30.100 ************************************ 00:03:30.100 21:35:53 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:30.100 00:03:30.100 real 0m32.174s 00:03:30.100 user 3m32.259s 00:03:30.100 sys 0m31.706s 00:03:30.100 21:35:53 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:30.100 21:35:53 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:30.359 21:35:53 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:30.359 21:35:53 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:30.359 21:35:53 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:30.359 21:35:53 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:30.359 21:35:53 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:30.359 21:35:53 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:30.359 21:35:53 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:30.359 21:35:53 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk 
--with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared
00:03:30.359 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs...
00:03:30.359 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib
00:03:30.359 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include
00:03:30.359 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:03:30.926 Using 'verbs' RDMA provider
00:03:41.867 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:51.853 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:52.110 Creating mk/config.mk...done.
00:03:52.110 Creating mk/cc.flags.mk...done.
00:03:52.110 Type 'make' to build.
00:03:52.110 21:36:15 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:03:52.110 21:36:15 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:52.110 21:36:15 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:52.110 21:36:15 -- common/autotest_common.sh@10 -- $ set +x
00:03:52.110 ************************************
00:03:52.110 START TEST make
00:03:52.110 ************************************
00:03:52.110 21:36:15 make -- common/autotest_common.sh@1129 -- $ make -j10
00:03:52.368 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:03:52.368 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:03:52.368 meson setup builddir \
00:03:52.368 -Dwith-libaio=enabled \
00:03:52.368 -Dwith-liburing=enabled \
00:03:52.368 -Dwith-libvfn=disabled \
00:03:52.368 -Dwith-spdk=disabled \
00:03:52.368 -Dexamples=false \
00:03:52.368 -Dtests=false \
00:03:52.368 -Dtools=false && \
00:03:52.368 meson compile -C builddir && \
00:03:52.368 cd -)
00:03:52.368 make[1]: Nothing to be done for 'all'.
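For reference, the configure and xnvme sub-build steps echoed above reduce to the following standalone commands. This is only a sketch reconstructed from the log entries themselves: the /home/vagrant/spdk_repo paths belong to this CI VM, and a local checkout would substitute its own SPDK and DPDK locations.

    # SPDK configure step from the log, pointing at the locally built DPDK
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
                --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
                --enable-ubsan --enable-asan --enable-coverage --with-ublk \
                --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared

    # xnvme sub-build that 'make -j10' launches, with the same meson options as the recipe above
    cd /home/vagrant/spdk_repo/spdk/xnvme
    export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig
    meson setup builddir -Dwith-libaio=enabled -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled -Dwith-spdk=disabled \
        -Dexamples=false -Dtests=false -Dtools=false
    meson compile -C builddir

The meson configuration output that follows confirms how these options were picked up: libaio and liburing enabled, libvfn and the spdk subproject disabled.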
00:03:54.268 The Meson build system 00:03:54.268 Version: 1.5.0 00:03:54.268 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:54.268 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:54.268 Build type: native build 00:03:54.268 Project name: xnvme 00:03:54.268 Project version: 0.7.5 00:03:54.268 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:54.268 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:54.268 Host machine cpu family: x86_64 00:03:54.268 Host machine cpu: x86_64 00:03:54.268 Message: host_machine.system: linux 00:03:54.268 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:54.268 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:54.268 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:54.268 Run-time dependency threads found: YES 00:03:54.268 Has header "setupapi.h" : NO 00:03:54.268 Has header "linux/blkzoned.h" : YES 00:03:54.268 Has header "linux/blkzoned.h" : YES (cached) 00:03:54.268 Has header "libaio.h" : YES 00:03:54.268 Library aio found: YES 00:03:54.268 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:54.268 Run-time dependency liburing found: YES 2.2 00:03:54.268 Dependency libvfn skipped: feature with-libvfn disabled 00:03:54.268 Found CMake: /usr/bin/cmake (3.27.7) 00:03:54.268 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:03:54.268 Subproject spdk : skipped: feature with-spdk disabled 00:03:54.268 Run-time dependency appleframeworks found: NO (tried framework) 00:03:54.268 Run-time dependency appleframeworks found: NO (tried framework) 00:03:54.268 Library rt found: YES 00:03:54.268 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:54.268 Configuring xnvme_config.h using configuration 00:03:54.268 Configuring xnvme.spec using configuration 00:03:54.268 Run-time dependency bash-completion found: YES 2.11 00:03:54.268 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:54.268 Program cp found: YES (/usr/bin/cp) 00:03:54.268 Build targets in project: 3 00:03:54.268 00:03:54.268 xnvme 0.7.5 00:03:54.268 00:03:54.268 Subprojects 00:03:54.268 spdk : NO Feature 'with-spdk' disabled 00:03:54.268 00:03:54.268 User defined options 00:03:54.268 examples : false 00:03:54.268 tests : false 00:03:54.268 tools : false 00:03:54.268 with-libaio : enabled 00:03:54.268 with-liburing: enabled 00:03:54.268 with-libvfn : disabled 00:03:54.268 with-spdk : disabled 00:03:54.268 00:03:54.268 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:54.526 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:54.526 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:03:54.526 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:03:54.526 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:03:54.526 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:03:54.526 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:03:54.526 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:03:54.526 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:03:54.526 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:03:54.526 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:03:54.784 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:03:54.784 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:03:54.784 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:03:54.784 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:03:54.784 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:03:54.784 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:03:54.784 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:03:54.784 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:03:54.784 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:03:54.784 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:03:54.784 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:03:54.784 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:03:54.784 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:03:54.784 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:03:54.784 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:03:54.784 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:03:54.784 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:03:54.784 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:03:54.784 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:03:54.784 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:03:54.784 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:03:54.784 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:03:54.784 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:03:54.784 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:03:54.784 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:03:54.784 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:03:54.784 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:03:54.784 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:03:54.784 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:03:54.784 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:03:54.784 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:03:54.784 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:03:54.784 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:03:54.784 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:03:54.784 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:03:55.042 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:03:55.042 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:03:55.042 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:03:55.042 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:03:55.042 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:03:55.042 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:03:55.042 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:03:55.042 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:03:55.042 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:03:55.042 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:03:55.042 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:03:55.042 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:03:55.042 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:03:55.042 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:03:55.042 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:03:55.042 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:03:55.042 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:03:55.042 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:03:55.042 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:03:55.042 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:03:55.042 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:03:55.042 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:03:55.042 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:03:55.300 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:03:55.300 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:03:55.300 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:03:55.300 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:03:55.300 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:03:55.300 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:03:55.559 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:03:55.559 [75/76] Linking static target lib/libxnvme.a 00:03:55.559 [76/76] Linking target lib/libxnvme.so.0.7.5 00:03:55.559 INFO: autodetecting backend as ninja 00:03:55.559 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:55.559 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:27.622 CC lib/log/log.o 00:04:27.622 CC lib/log/log_flags.o 00:04:27.622 CC lib/ut/ut.o 00:04:27.622 CC lib/log/log_deprecated.o 00:04:27.622 CC lib/ut_mock/mock.o 00:04:27.622 LIB libspdk_log.a 00:04:27.622 LIB libspdk_ut.a 00:04:27.622 LIB libspdk_ut_mock.a 00:04:27.622 SO libspdk_log.so.7.1 00:04:27.622 SO libspdk_ut.so.2.0 00:04:27.622 SO libspdk_ut_mock.so.6.0 00:04:27.622 SYMLINK libspdk_log.so 00:04:27.622 SYMLINK libspdk_ut_mock.so 00:04:27.622 SYMLINK libspdk_ut.so 00:04:27.622 CC lib/ioat/ioat.o 00:04:27.622 CXX lib/trace_parser/trace.o 00:04:27.622 CC lib/util/base64.o 00:04:27.622 CC lib/util/bit_array.o 00:04:27.622 CC lib/dma/dma.o 00:04:27.622 CC lib/util/cpuset.o 00:04:27.622 CC lib/util/crc16.o 00:04:27.622 CC lib/util/crc32.o 00:04:27.622 CC lib/util/crc32c.o 00:04:27.622 CC lib/vfio_user/host/vfio_user_pci.o 00:04:27.622 CC lib/util/crc32_ieee.o 00:04:27.622 CC lib/util/crc64.o 00:04:27.622 CC lib/util/dif.o 00:04:27.622 CC lib/util/fd.o 00:04:27.622 CC lib/util/fd_group.o 00:04:27.622 LIB libspdk_dma.a 00:04:27.622 SO libspdk_dma.so.5.0 00:04:27.622 CC lib/util/file.o 00:04:27.622 CC lib/util/hexlify.o 00:04:27.622 CC lib/util/iov.o 00:04:27.622 CC lib/util/math.o 00:04:27.622 SYMLINK libspdk_dma.so 00:04:27.622 CC lib/util/net.o 00:04:27.622 CC 
lib/vfio_user/host/vfio_user.o 00:04:27.622 LIB libspdk_ioat.a 00:04:27.622 SO libspdk_ioat.so.7.0 00:04:27.622 SYMLINK libspdk_ioat.so 00:04:27.622 CC lib/util/pipe.o 00:04:27.622 CC lib/util/strerror_tls.o 00:04:27.622 CC lib/util/string.o 00:04:27.622 CC lib/util/uuid.o 00:04:27.622 CC lib/util/xor.o 00:04:27.622 LIB libspdk_vfio_user.a 00:04:27.622 CC lib/util/zipf.o 00:04:27.622 CC lib/util/md5.o 00:04:27.622 SO libspdk_vfio_user.so.5.0 00:04:27.622 SYMLINK libspdk_vfio_user.so 00:04:27.622 LIB libspdk_util.a 00:04:27.622 LIB libspdk_trace_parser.a 00:04:27.622 SO libspdk_util.so.10.1 00:04:27.622 SO libspdk_trace_parser.so.6.0 00:04:27.622 SYMLINK libspdk_util.so 00:04:27.622 SYMLINK libspdk_trace_parser.so 00:04:27.622 CC lib/env_dpdk/env.o 00:04:27.622 CC lib/env_dpdk/memory.o 00:04:27.622 CC lib/env_dpdk/pci.o 00:04:27.622 CC lib/env_dpdk/init.o 00:04:27.622 CC lib/vmd/vmd.o 00:04:27.622 CC lib/env_dpdk/threads.o 00:04:27.622 CC lib/conf/conf.o 00:04:27.622 CC lib/idxd/idxd.o 00:04:27.622 CC lib/json/json_parse.o 00:04:27.622 CC lib/rdma_utils/rdma_utils.o 00:04:27.622 CC lib/env_dpdk/pci_ioat.o 00:04:27.622 LIB libspdk_conf.a 00:04:27.622 SO libspdk_conf.so.6.0 00:04:27.622 CC lib/json/json_util.o 00:04:27.622 CC lib/env_dpdk/pci_virtio.o 00:04:27.622 LIB libspdk_rdma_utils.a 00:04:27.622 SYMLINK libspdk_conf.so 00:04:27.622 CC lib/env_dpdk/pci_vmd.o 00:04:27.622 SO libspdk_rdma_utils.so.1.0 00:04:27.622 CC lib/env_dpdk/pci_idxd.o 00:04:27.622 CC lib/idxd/idxd_user.o 00:04:27.622 CC lib/idxd/idxd_kernel.o 00:04:27.622 CC lib/vmd/led.o 00:04:27.622 SYMLINK libspdk_rdma_utils.so 00:04:27.622 CC lib/env_dpdk/pci_event.o 00:04:27.622 CC lib/env_dpdk/sigbus_handler.o 00:04:27.622 CC lib/json/json_write.o 00:04:27.622 CC lib/env_dpdk/pci_dpdk.o 00:04:27.622 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:27.622 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:27.622 CC lib/rdma_provider/common.o 00:04:27.622 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:27.622 LIB libspdk_idxd.a 00:04:27.622 SO libspdk_idxd.so.12.1 00:04:27.622 LIB libspdk_json.a 00:04:27.622 LIB libspdk_vmd.a 00:04:27.622 SO libspdk_json.so.6.0 00:04:27.622 SO libspdk_vmd.so.6.0 00:04:27.622 SYMLINK libspdk_idxd.so 00:04:27.622 SYMLINK libspdk_vmd.so 00:04:27.622 SYMLINK libspdk_json.so 00:04:27.622 LIB libspdk_rdma_provider.a 00:04:27.622 SO libspdk_rdma_provider.so.7.0 00:04:27.622 SYMLINK libspdk_rdma_provider.so 00:04:27.622 CC lib/jsonrpc/jsonrpc_server.o 00:04:27.622 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:27.622 CC lib/jsonrpc/jsonrpc_client.o 00:04:27.622 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:27.622 LIB libspdk_jsonrpc.a 00:04:27.622 SO libspdk_jsonrpc.so.6.0 00:04:27.622 SYMLINK libspdk_jsonrpc.so 00:04:27.622 LIB libspdk_env_dpdk.a 00:04:27.622 SO libspdk_env_dpdk.so.15.1 00:04:27.622 CC lib/rpc/rpc.o 00:04:27.622 SYMLINK libspdk_env_dpdk.so 00:04:27.622 LIB libspdk_rpc.a 00:04:27.622 SO libspdk_rpc.so.6.0 00:04:27.622 SYMLINK libspdk_rpc.so 00:04:27.883 CC lib/notify/notify.o 00:04:27.883 CC lib/notify/notify_rpc.o 00:04:27.883 CC lib/trace/trace_flags.o 00:04:27.883 CC lib/trace/trace.o 00:04:27.883 CC lib/trace/trace_rpc.o 00:04:27.883 CC lib/keyring/keyring.o 00:04:27.883 CC lib/keyring/keyring_rpc.o 00:04:27.883 LIB libspdk_notify.a 00:04:27.883 SO libspdk_notify.so.6.0 00:04:28.144 LIB libspdk_keyring.a 00:04:28.144 SO libspdk_keyring.so.2.0 00:04:28.144 SYMLINK libspdk_notify.so 00:04:28.144 LIB libspdk_trace.a 00:04:28.144 SYMLINK libspdk_keyring.so 00:04:28.144 SO libspdk_trace.so.11.0 00:04:28.144 SYMLINK 
libspdk_trace.so 00:04:28.405 CC lib/thread/iobuf.o 00:04:28.405 CC lib/thread/thread.o 00:04:28.405 CC lib/sock/sock.o 00:04:28.405 CC lib/sock/sock_rpc.o 00:04:28.666 LIB libspdk_sock.a 00:04:28.927 SO libspdk_sock.so.10.0 00:04:28.927 SYMLINK libspdk_sock.so 00:04:29.188 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:29.188 CC lib/nvme/nvme_ctrlr.o 00:04:29.188 CC lib/nvme/nvme_pcie_common.o 00:04:29.188 CC lib/nvme/nvme_fabric.o 00:04:29.188 CC lib/nvme/nvme_ns_cmd.o 00:04:29.188 CC lib/nvme/nvme_pcie.o 00:04:29.188 CC lib/nvme/nvme_ns.o 00:04:29.188 CC lib/nvme/nvme.o 00:04:29.188 CC lib/nvme/nvme_qpair.o 00:04:29.761 CC lib/nvme/nvme_quirks.o 00:04:29.761 CC lib/nvme/nvme_transport.o 00:04:29.761 CC lib/nvme/nvme_discovery.o 00:04:29.761 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:29.761 LIB libspdk_thread.a 00:04:29.761 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:30.022 CC lib/nvme/nvme_tcp.o 00:04:30.022 SO libspdk_thread.so.11.0 00:04:30.022 CC lib/nvme/nvme_opal.o 00:04:30.022 SYMLINK libspdk_thread.so 00:04:30.022 CC lib/nvme/nvme_io_msg.o 00:04:30.022 CC lib/nvme/nvme_poll_group.o 00:04:30.282 CC lib/nvme/nvme_zns.o 00:04:30.282 CC lib/nvme/nvme_stubs.o 00:04:30.282 CC lib/nvme/nvme_auth.o 00:04:30.282 CC lib/nvme/nvme_cuse.o 00:04:30.543 CC lib/nvme/nvme_rdma.o 00:04:30.543 CC lib/accel/accel.o 00:04:30.543 CC lib/blob/blobstore.o 00:04:30.543 CC lib/blob/request.o 00:04:30.543 CC lib/accel/accel_rpc.o 00:04:30.804 CC lib/accel/accel_sw.o 00:04:31.064 CC lib/init/json_config.o 00:04:31.064 CC lib/init/subsystem.o 00:04:31.064 CC lib/init/subsystem_rpc.o 00:04:31.064 CC lib/blob/zeroes.o 00:04:31.064 CC lib/init/rpc.o 00:04:31.064 CC lib/blob/blob_bs_dev.o 00:04:31.322 LIB libspdk_init.a 00:04:31.322 SO libspdk_init.so.6.0 00:04:31.322 CC lib/virtio/virtio.o 00:04:31.322 CC lib/virtio/virtio_vhost_user.o 00:04:31.322 CC lib/virtio/virtio_vfio_user.o 00:04:31.322 CC lib/fsdev/fsdev.o 00:04:31.322 CC lib/virtio/virtio_pci.o 00:04:31.322 SYMLINK libspdk_init.so 00:04:31.322 CC lib/fsdev/fsdev_io.o 00:04:31.583 CC lib/event/app.o 00:04:31.583 CC lib/event/reactor.o 00:04:31.583 CC lib/event/log_rpc.o 00:04:31.583 CC lib/fsdev/fsdev_rpc.o 00:04:31.583 LIB libspdk_virtio.a 00:04:31.583 SO libspdk_virtio.so.7.0 00:04:31.844 LIB libspdk_accel.a 00:04:31.844 LIB libspdk_nvme.a 00:04:31.844 CC lib/event/app_rpc.o 00:04:31.844 CC lib/event/scheduler_static.o 00:04:31.844 SO libspdk_accel.so.16.0 00:04:31.844 SYMLINK libspdk_virtio.so 00:04:31.844 SYMLINK libspdk_accel.so 00:04:31.844 LIB libspdk_fsdev.a 00:04:31.844 SO libspdk_nvme.so.15.0 00:04:31.844 SO libspdk_fsdev.so.2.0 00:04:32.104 SYMLINK libspdk_fsdev.so 00:04:32.104 CC lib/bdev/bdev_rpc.o 00:04:32.104 CC lib/bdev/bdev.o 00:04:32.104 CC lib/bdev/bdev_zone.o 00:04:32.104 LIB libspdk_event.a 00:04:32.104 CC lib/bdev/part.o 00:04:32.105 CC lib/bdev/scsi_nvme.o 00:04:32.105 SO libspdk_event.so.14.0 00:04:32.105 SYMLINK libspdk_event.so 00:04:32.105 SYMLINK libspdk_nvme.so 00:04:32.105 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:33.046 LIB libspdk_fuse_dispatcher.a 00:04:33.046 SO libspdk_fuse_dispatcher.so.1.0 00:04:33.046 SYMLINK libspdk_fuse_dispatcher.so 00:04:33.987 LIB libspdk_blob.a 00:04:33.987 SO libspdk_blob.so.12.0 00:04:33.987 SYMLINK libspdk_blob.so 00:04:34.245 CC lib/lvol/lvol.o 00:04:34.245 CC lib/blobfs/blobfs.o 00:04:34.245 CC lib/blobfs/tree.o 00:04:34.505 LIB libspdk_bdev.a 00:04:34.764 SO libspdk_bdev.so.17.0 00:04:34.764 SYMLINK libspdk_bdev.so 00:04:35.022 CC lib/ftl/ftl_core.o 00:04:35.022 CC lib/ftl/ftl_init.o 00:04:35.022 CC 
lib/ftl/ftl_layout.o 00:04:35.022 CC lib/ftl/ftl_debug.o 00:04:35.022 CC lib/nbd/nbd.o 00:04:35.022 CC lib/nvmf/ctrlr.o 00:04:35.022 CC lib/scsi/dev.o 00:04:35.022 CC lib/ublk/ublk.o 00:04:35.022 LIB libspdk_blobfs.a 00:04:35.022 SO libspdk_blobfs.so.11.0 00:04:35.022 LIB libspdk_lvol.a 00:04:35.022 SYMLINK libspdk_blobfs.so 00:04:35.022 CC lib/ftl/ftl_io.o 00:04:35.022 CC lib/ftl/ftl_sb.o 00:04:35.022 SO libspdk_lvol.so.11.0 00:04:35.022 CC lib/scsi/lun.o 00:04:35.281 CC lib/ftl/ftl_l2p.o 00:04:35.281 CC lib/ftl/ftl_l2p_flat.o 00:04:35.281 SYMLINK libspdk_lvol.so 00:04:35.281 CC lib/ftl/ftl_nv_cache.o 00:04:35.281 CC lib/ftl/ftl_band.o 00:04:35.281 CC lib/scsi/port.o 00:04:35.281 CC lib/ftl/ftl_band_ops.o 00:04:35.281 CC lib/nbd/nbd_rpc.o 00:04:35.281 CC lib/scsi/scsi.o 00:04:35.281 CC lib/scsi/scsi_bdev.o 00:04:35.539 CC lib/ftl/ftl_writer.o 00:04:35.539 CC lib/scsi/scsi_pr.o 00:04:35.539 CC lib/ublk/ublk_rpc.o 00:04:35.539 CC lib/nvmf/ctrlr_discovery.o 00:04:35.539 LIB libspdk_nbd.a 00:04:35.539 SO libspdk_nbd.so.7.0 00:04:35.539 CC lib/ftl/ftl_rq.o 00:04:35.539 LIB libspdk_ublk.a 00:04:35.539 SYMLINK libspdk_nbd.so 00:04:35.539 CC lib/ftl/ftl_reloc.o 00:04:35.539 SO libspdk_ublk.so.3.0 00:04:35.539 SYMLINK libspdk_ublk.so 00:04:35.539 CC lib/ftl/ftl_l2p_cache.o 00:04:35.539 CC lib/scsi/scsi_rpc.o 00:04:35.539 CC lib/ftl/ftl_p2l.o 00:04:35.799 CC lib/nvmf/ctrlr_bdev.o 00:04:35.799 CC lib/scsi/task.o 00:04:35.799 CC lib/nvmf/subsystem.o 00:04:35.799 CC lib/nvmf/nvmf.o 00:04:35.799 CC lib/ftl/ftl_p2l_log.o 00:04:36.059 CC lib/nvmf/nvmf_rpc.o 00:04:36.059 LIB libspdk_scsi.a 00:04:36.059 SO libspdk_scsi.so.9.0 00:04:36.059 CC lib/nvmf/transport.o 00:04:36.059 CC lib/ftl/mngt/ftl_mngt.o 00:04:36.059 SYMLINK libspdk_scsi.so 00:04:36.059 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:36.318 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:36.318 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:36.318 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:36.318 CC lib/nvmf/tcp.o 00:04:36.318 CC lib/nvmf/stubs.o 00:04:36.318 CC lib/nvmf/mdns_server.o 00:04:36.576 CC lib/iscsi/conn.o 00:04:36.576 CC lib/iscsi/init_grp.o 00:04:36.576 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:36.576 CC lib/nvmf/rdma.o 00:04:36.576 CC lib/nvmf/auth.o 00:04:36.835 CC lib/iscsi/iscsi.o 00:04:36.835 CC lib/iscsi/param.o 00:04:36.835 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:36.835 CC lib/vhost/vhost.o 00:04:36.835 CC lib/vhost/vhost_rpc.o 00:04:37.093 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:37.093 CC lib/vhost/vhost_scsi.o 00:04:37.093 CC lib/vhost/vhost_blk.o 00:04:37.093 CC lib/vhost/rte_vhost_user.o 00:04:37.093 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:37.351 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:37.351 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:37.351 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:37.609 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:37.609 CC lib/ftl/utils/ftl_conf.o 00:04:37.609 CC lib/ftl/utils/ftl_md.o 00:04:37.609 CC lib/iscsi/portal_grp.o 00:04:37.609 CC lib/iscsi/tgt_node.o 00:04:37.609 CC lib/iscsi/iscsi_subsystem.o 00:04:37.867 CC lib/iscsi/iscsi_rpc.o 00:04:37.867 CC lib/iscsi/task.o 00:04:37.867 CC lib/ftl/utils/ftl_mempool.o 00:04:37.867 CC lib/ftl/utils/ftl_bitmap.o 00:04:37.867 LIB libspdk_vhost.a 00:04:37.867 CC lib/ftl/utils/ftl_property.o 00:04:37.867 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:37.867 SO libspdk_vhost.so.8.0 00:04:38.126 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:38.126 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:38.126 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:38.126 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:38.126 
SYMLINK libspdk_vhost.so 00:04:38.126 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:38.126 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:38.126 LIB libspdk_iscsi.a 00:04:38.126 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:38.126 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:38.126 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:38.126 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:38.126 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:38.126 SO libspdk_iscsi.so.8.0 00:04:38.126 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:38.126 CC lib/ftl/base/ftl_base_dev.o 00:04:38.126 CC lib/ftl/base/ftl_base_bdev.o 00:04:38.126 CC lib/ftl/ftl_trace.o 00:04:38.385 SYMLINK libspdk_iscsi.so 00:04:38.385 LIB libspdk_nvmf.a 00:04:38.385 LIB libspdk_ftl.a 00:04:38.385 SO libspdk_nvmf.so.20.0 00:04:38.644 SYMLINK libspdk_nvmf.so 00:04:38.644 SO libspdk_ftl.so.9.0 00:04:38.902 SYMLINK libspdk_ftl.so 00:04:39.161 CC module/env_dpdk/env_dpdk_rpc.o 00:04:39.161 CC module/keyring/linux/keyring.o 00:04:39.161 CC module/keyring/file/keyring.o 00:04:39.161 CC module/fsdev/aio/fsdev_aio.o 00:04:39.161 CC module/sock/posix/posix.o 00:04:39.161 CC module/blob/bdev/blob_bdev.o 00:04:39.161 CC module/scheduler/gscheduler/gscheduler.o 00:04:39.161 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:39.161 CC module/accel/error/accel_error.o 00:04:39.161 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:39.161 LIB libspdk_env_dpdk_rpc.a 00:04:39.161 SO libspdk_env_dpdk_rpc.so.6.0 00:04:39.161 LIB libspdk_scheduler_gscheduler.a 00:04:39.161 LIB libspdk_scheduler_dpdk_governor.a 00:04:39.161 SO libspdk_scheduler_gscheduler.so.4.0 00:04:39.161 CC module/keyring/linux/keyring_rpc.o 00:04:39.161 SYMLINK libspdk_env_dpdk_rpc.so 00:04:39.161 CC module/keyring/file/keyring_rpc.o 00:04:39.161 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:39.161 SYMLINK libspdk_scheduler_gscheduler.so 00:04:39.161 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:39.419 LIB libspdk_scheduler_dynamic.a 00:04:39.419 CC module/accel/error/accel_error_rpc.o 00:04:39.419 LIB libspdk_blob_bdev.a 00:04:39.419 SO libspdk_scheduler_dynamic.so.4.0 00:04:39.419 LIB libspdk_keyring_linux.a 00:04:39.419 SO libspdk_blob_bdev.so.12.0 00:04:39.419 LIB libspdk_keyring_file.a 00:04:39.419 SO libspdk_keyring_linux.so.1.0 00:04:39.419 CC module/accel/ioat/accel_ioat.o 00:04:39.419 SO libspdk_keyring_file.so.2.0 00:04:39.419 SYMLINK libspdk_scheduler_dynamic.so 00:04:39.419 CC module/accel/dsa/accel_dsa.o 00:04:39.419 CC module/accel/dsa/accel_dsa_rpc.o 00:04:39.419 CC module/accel/iaa/accel_iaa.o 00:04:39.419 SYMLINK libspdk_blob_bdev.so 00:04:39.419 SYMLINK libspdk_keyring_linux.so 00:04:39.419 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:39.419 SYMLINK libspdk_keyring_file.so 00:04:39.419 CC module/fsdev/aio/linux_aio_mgr.o 00:04:39.419 LIB libspdk_accel_error.a 00:04:39.419 SO libspdk_accel_error.so.2.0 00:04:39.419 CC module/accel/ioat/accel_ioat_rpc.o 00:04:39.419 SYMLINK libspdk_accel_error.so 00:04:39.676 CC module/accel/iaa/accel_iaa_rpc.o 00:04:39.676 LIB libspdk_fsdev_aio.a 00:04:39.676 LIB libspdk_accel_ioat.a 00:04:39.676 LIB libspdk_sock_posix.a 00:04:39.676 LIB libspdk_accel_iaa.a 00:04:39.676 SO libspdk_fsdev_aio.so.1.0 00:04:39.676 SO libspdk_accel_ioat.so.6.0 00:04:39.676 LIB libspdk_accel_dsa.a 00:04:39.676 CC module/bdev/delay/vbdev_delay.o 00:04:39.676 SO libspdk_sock_posix.so.6.0 00:04:39.676 SO libspdk_accel_iaa.so.3.0 00:04:39.676 SO libspdk_accel_dsa.so.5.0 00:04:39.676 SYMLINK libspdk_accel_ioat.so 00:04:39.676 CC module/bdev/error/vbdev_error.o 00:04:39.676 SYMLINK 
libspdk_fsdev_aio.so 00:04:39.676 CC module/bdev/gpt/gpt.o 00:04:39.676 CC module/bdev/error/vbdev_error_rpc.o 00:04:39.676 SYMLINK libspdk_accel_dsa.so 00:04:39.676 SYMLINK libspdk_accel_iaa.so 00:04:39.676 CC module/blobfs/bdev/blobfs_bdev.o 00:04:39.676 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:39.676 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:39.676 CC module/bdev/lvol/vbdev_lvol.o 00:04:39.676 SYMLINK libspdk_sock_posix.so 00:04:39.676 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:39.936 CC module/bdev/malloc/bdev_malloc.o 00:04:39.936 CC module/bdev/gpt/vbdev_gpt.o 00:04:39.936 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:39.936 LIB libspdk_blobfs_bdev.a 00:04:39.936 LIB libspdk_bdev_error.a 00:04:39.936 SO libspdk_blobfs_bdev.so.6.0 00:04:39.936 SO libspdk_bdev_error.so.6.0 00:04:39.936 SYMLINK libspdk_blobfs_bdev.so 00:04:39.936 CC module/bdev/null/bdev_null.o 00:04:39.936 SYMLINK libspdk_bdev_error.so 00:04:39.936 CC module/bdev/null/bdev_null_rpc.o 00:04:39.936 LIB libspdk_bdev_delay.a 00:04:39.936 CC module/bdev/nvme/bdev_nvme.o 00:04:39.936 SO libspdk_bdev_delay.so.6.0 00:04:40.205 SYMLINK libspdk_bdev_delay.so 00:04:40.205 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:40.205 CC module/bdev/nvme/nvme_rpc.o 00:04:40.205 CC module/bdev/passthru/vbdev_passthru.o 00:04:40.205 LIB libspdk_bdev_malloc.a 00:04:40.205 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:40.205 CC module/bdev/raid/bdev_raid.o 00:04:40.205 LIB libspdk_bdev_gpt.a 00:04:40.205 SO libspdk_bdev_malloc.so.6.0 00:04:40.205 SO libspdk_bdev_gpt.so.6.0 00:04:40.205 SYMLINK libspdk_bdev_malloc.so 00:04:40.205 SYMLINK libspdk_bdev_gpt.so 00:04:40.205 LIB libspdk_bdev_lvol.a 00:04:40.205 LIB libspdk_bdev_null.a 00:04:40.205 SO libspdk_bdev_lvol.so.6.0 00:04:40.205 SO libspdk_bdev_null.so.6.0 00:04:40.206 CC module/bdev/nvme/bdev_mdns_client.o 00:04:40.206 CC module/bdev/nvme/vbdev_opal.o 00:04:40.206 SYMLINK libspdk_bdev_lvol.so 00:04:40.206 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:40.206 SYMLINK libspdk_bdev_null.so 00:04:40.469 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:40.469 CC module/bdev/split/vbdev_split.o 00:04:40.469 LIB libspdk_bdev_passthru.a 00:04:40.469 CC module/bdev/split/vbdev_split_rpc.o 00:04:40.469 CC module/bdev/xnvme/bdev_xnvme.o 00:04:40.469 SO libspdk_bdev_passthru.so.6.0 00:04:40.469 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:40.469 SYMLINK libspdk_bdev_passthru.so 00:04:40.469 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:40.469 LIB libspdk_bdev_split.a 00:04:40.469 SO libspdk_bdev_split.so.6.0 00:04:40.729 CC module/bdev/ftl/bdev_ftl.o 00:04:40.729 CC module/bdev/aio/bdev_aio.o 00:04:40.729 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:40.729 LIB libspdk_bdev_xnvme.a 00:04:40.729 CC module/bdev/aio/bdev_aio_rpc.o 00:04:40.729 SYMLINK libspdk_bdev_split.so 00:04:40.729 SO libspdk_bdev_xnvme.so.3.0 00:04:40.729 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:40.729 CC module/bdev/iscsi/bdev_iscsi.o 00:04:40.729 SYMLINK libspdk_bdev_xnvme.so 00:04:40.729 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:40.729 LIB libspdk_bdev_zone_block.a 00:04:40.729 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:40.729 SO libspdk_bdev_zone_block.so.6.0 00:04:40.729 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:40.729 CC module/bdev/raid/bdev_raid_rpc.o 00:04:40.729 LIB libspdk_bdev_ftl.a 00:04:40.988 SO libspdk_bdev_ftl.so.6.0 00:04:40.988 LIB libspdk_bdev_aio.a 00:04:40.988 SYMLINK libspdk_bdev_zone_block.so 00:04:40.988 SYMLINK libspdk_bdev_ftl.so 00:04:40.988 CC module/bdev/raid/bdev_raid_sb.o 
00:04:40.988 CC module/bdev/raid/raid0.o 00:04:40.988 CC module/bdev/raid/raid1.o 00:04:40.988 SO libspdk_bdev_aio.so.6.0 00:04:40.988 SYMLINK libspdk_bdev_aio.so 00:04:40.988 CC module/bdev/raid/concat.o 00:04:40.988 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:40.988 LIB libspdk_bdev_iscsi.a 00:04:40.988 SO libspdk_bdev_iscsi.so.6.0 00:04:41.246 SYMLINK libspdk_bdev_iscsi.so 00:04:41.246 LIB libspdk_bdev_raid.a 00:04:41.246 LIB libspdk_bdev_virtio.a 00:04:41.246 SO libspdk_bdev_raid.so.6.0 00:04:41.246 SO libspdk_bdev_virtio.so.6.0 00:04:41.246 SYMLINK libspdk_bdev_virtio.so 00:04:41.246 SYMLINK libspdk_bdev_raid.so 00:04:42.181 LIB libspdk_bdev_nvme.a 00:04:42.181 SO libspdk_bdev_nvme.so.7.1 00:04:42.181 SYMLINK libspdk_bdev_nvme.so 00:04:42.760 CC module/event/subsystems/vmd/vmd.o 00:04:42.760 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:42.760 CC module/event/subsystems/sock/sock.o 00:04:42.760 CC module/event/subsystems/scheduler/scheduler.o 00:04:42.760 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:42.760 CC module/event/subsystems/iobuf/iobuf.o 00:04:42.760 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:42.760 CC module/event/subsystems/fsdev/fsdev.o 00:04:42.760 CC module/event/subsystems/keyring/keyring.o 00:04:42.760 LIB libspdk_event_sock.a 00:04:42.760 LIB libspdk_event_scheduler.a 00:04:42.760 LIB libspdk_event_vmd.a 00:04:42.760 LIB libspdk_event_fsdev.a 00:04:42.760 LIB libspdk_event_vhost_blk.a 00:04:42.760 SO libspdk_event_scheduler.so.4.0 00:04:42.760 SO libspdk_event_sock.so.5.0 00:04:42.760 LIB libspdk_event_keyring.a 00:04:42.760 LIB libspdk_event_iobuf.a 00:04:42.760 SO libspdk_event_fsdev.so.1.0 00:04:42.760 SO libspdk_event_vmd.so.6.0 00:04:42.760 SO libspdk_event_vhost_blk.so.3.0 00:04:42.760 SO libspdk_event_keyring.so.1.0 00:04:42.760 SYMLINK libspdk_event_scheduler.so 00:04:42.760 SO libspdk_event_iobuf.so.3.0 00:04:42.760 SYMLINK libspdk_event_sock.so 00:04:42.760 SYMLINK libspdk_event_fsdev.so 00:04:42.760 SYMLINK libspdk_event_vhost_blk.so 00:04:42.760 SYMLINK libspdk_event_keyring.so 00:04:42.760 SYMLINK libspdk_event_vmd.so 00:04:42.760 SYMLINK libspdk_event_iobuf.so 00:04:43.045 CC module/event/subsystems/accel/accel.o 00:04:43.045 LIB libspdk_event_accel.a 00:04:43.304 SO libspdk_event_accel.so.6.0 00:04:43.304 SYMLINK libspdk_event_accel.so 00:04:43.565 CC module/event/subsystems/bdev/bdev.o 00:04:43.565 LIB libspdk_event_bdev.a 00:04:43.565 SO libspdk_event_bdev.so.6.0 00:04:43.565 SYMLINK libspdk_event_bdev.so 00:04:43.826 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:43.826 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:43.826 CC module/event/subsystems/ublk/ublk.o 00:04:43.826 CC module/event/subsystems/scsi/scsi.o 00:04:43.826 CC module/event/subsystems/nbd/nbd.o 00:04:44.087 LIB libspdk_event_ublk.a 00:04:44.087 LIB libspdk_event_nbd.a 00:04:44.087 LIB libspdk_event_scsi.a 00:04:44.087 SO libspdk_event_ublk.so.3.0 00:04:44.087 SO libspdk_event_nbd.so.6.0 00:04:44.087 SO libspdk_event_scsi.so.6.0 00:04:44.087 SYMLINK libspdk_event_ublk.so 00:04:44.087 SYMLINK libspdk_event_nbd.so 00:04:44.087 LIB libspdk_event_nvmf.a 00:04:44.087 SYMLINK libspdk_event_scsi.so 00:04:44.087 SO libspdk_event_nvmf.so.6.0 00:04:44.087 SYMLINK libspdk_event_nvmf.so 00:04:44.346 CC module/event/subsystems/iscsi/iscsi.o 00:04:44.346 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:44.346 LIB libspdk_event_iscsi.a 00:04:44.346 LIB libspdk_event_vhost_scsi.a 00:04:44.346 SO libspdk_event_iscsi.so.6.0 00:04:44.346 SO 
libspdk_event_vhost_scsi.so.3.0 00:04:44.346 SYMLINK libspdk_event_iscsi.so 00:04:44.346 SYMLINK libspdk_event_vhost_scsi.so 00:04:44.606 SO libspdk.so.6.0 00:04:44.606 SYMLINK libspdk.so 00:04:44.606 CXX app/trace/trace.o 00:04:44.606 TEST_HEADER include/spdk/accel.h 00:04:44.865 CC app/trace_record/trace_record.o 00:04:44.865 TEST_HEADER include/spdk/accel_module.h 00:04:44.865 TEST_HEADER include/spdk/assert.h 00:04:44.865 TEST_HEADER include/spdk/barrier.h 00:04:44.865 TEST_HEADER include/spdk/base64.h 00:04:44.865 TEST_HEADER include/spdk/bdev.h 00:04:44.865 TEST_HEADER include/spdk/bdev_module.h 00:04:44.865 TEST_HEADER include/spdk/bdev_zone.h 00:04:44.865 TEST_HEADER include/spdk/bit_array.h 00:04:44.865 TEST_HEADER include/spdk/bit_pool.h 00:04:44.865 TEST_HEADER include/spdk/blob_bdev.h 00:04:44.865 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:44.865 TEST_HEADER include/spdk/blobfs.h 00:04:44.865 TEST_HEADER include/spdk/blob.h 00:04:44.865 TEST_HEADER include/spdk/conf.h 00:04:44.865 TEST_HEADER include/spdk/config.h 00:04:44.865 TEST_HEADER include/spdk/cpuset.h 00:04:44.865 TEST_HEADER include/spdk/crc16.h 00:04:44.865 TEST_HEADER include/spdk/crc32.h 00:04:44.865 TEST_HEADER include/spdk/crc64.h 00:04:44.865 CC app/iscsi_tgt/iscsi_tgt.o 00:04:44.865 TEST_HEADER include/spdk/dif.h 00:04:44.865 CC app/nvmf_tgt/nvmf_main.o 00:04:44.865 TEST_HEADER include/spdk/dma.h 00:04:44.865 TEST_HEADER include/spdk/endian.h 00:04:44.865 TEST_HEADER include/spdk/env_dpdk.h 00:04:44.865 TEST_HEADER include/spdk/env.h 00:04:44.865 TEST_HEADER include/spdk/event.h 00:04:44.865 TEST_HEADER include/spdk/fd_group.h 00:04:44.865 TEST_HEADER include/spdk/fd.h 00:04:44.865 TEST_HEADER include/spdk/file.h 00:04:44.865 TEST_HEADER include/spdk/fsdev.h 00:04:44.865 TEST_HEADER include/spdk/fsdev_module.h 00:04:44.865 CC examples/util/zipf/zipf.o 00:04:44.865 TEST_HEADER include/spdk/ftl.h 00:04:44.865 CC test/thread/poller_perf/poller_perf.o 00:04:44.865 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:44.865 TEST_HEADER include/spdk/gpt_spec.h 00:04:44.865 TEST_HEADER include/spdk/hexlify.h 00:04:44.865 TEST_HEADER include/spdk/histogram_data.h 00:04:44.865 TEST_HEADER include/spdk/idxd.h 00:04:44.865 TEST_HEADER include/spdk/idxd_spec.h 00:04:44.865 TEST_HEADER include/spdk/init.h 00:04:44.865 TEST_HEADER include/spdk/ioat.h 00:04:44.865 TEST_HEADER include/spdk/ioat_spec.h 00:04:44.865 TEST_HEADER include/spdk/iscsi_spec.h 00:04:44.865 TEST_HEADER include/spdk/json.h 00:04:44.865 TEST_HEADER include/spdk/jsonrpc.h 00:04:44.865 TEST_HEADER include/spdk/keyring.h 00:04:44.865 TEST_HEADER include/spdk/keyring_module.h 00:04:44.865 TEST_HEADER include/spdk/likely.h 00:04:44.865 TEST_HEADER include/spdk/log.h 00:04:44.866 TEST_HEADER include/spdk/lvol.h 00:04:44.866 TEST_HEADER include/spdk/md5.h 00:04:44.866 TEST_HEADER include/spdk/memory.h 00:04:44.866 TEST_HEADER include/spdk/mmio.h 00:04:44.866 TEST_HEADER include/spdk/nbd.h 00:04:44.866 TEST_HEADER include/spdk/net.h 00:04:44.866 TEST_HEADER include/spdk/notify.h 00:04:44.866 TEST_HEADER include/spdk/nvme.h 00:04:44.866 TEST_HEADER include/spdk/nvme_intel.h 00:04:44.866 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:44.866 CC app/spdk_tgt/spdk_tgt.o 00:04:44.866 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:44.866 TEST_HEADER include/spdk/nvme_spec.h 00:04:44.866 TEST_HEADER include/spdk/nvme_zns.h 00:04:44.866 CC test/app/bdev_svc/bdev_svc.o 00:04:44.866 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:44.866 TEST_HEADER include/spdk/nvmf_fc_spec.h 
00:04:44.866 TEST_HEADER include/spdk/nvmf.h 00:04:44.866 TEST_HEADER include/spdk/nvmf_spec.h 00:04:44.866 TEST_HEADER include/spdk/nvmf_transport.h 00:04:44.866 TEST_HEADER include/spdk/opal.h 00:04:44.866 TEST_HEADER include/spdk/opal_spec.h 00:04:44.866 TEST_HEADER include/spdk/pci_ids.h 00:04:44.866 TEST_HEADER include/spdk/pipe.h 00:04:44.866 CC test/dma/test_dma/test_dma.o 00:04:44.866 TEST_HEADER include/spdk/queue.h 00:04:44.866 TEST_HEADER include/spdk/reduce.h 00:04:44.866 TEST_HEADER include/spdk/rpc.h 00:04:44.866 TEST_HEADER include/spdk/scheduler.h 00:04:44.866 TEST_HEADER include/spdk/scsi.h 00:04:44.866 TEST_HEADER include/spdk/scsi_spec.h 00:04:44.866 TEST_HEADER include/spdk/sock.h 00:04:44.866 TEST_HEADER include/spdk/stdinc.h 00:04:44.866 TEST_HEADER include/spdk/string.h 00:04:44.866 TEST_HEADER include/spdk/thread.h 00:04:44.866 TEST_HEADER include/spdk/trace.h 00:04:44.866 TEST_HEADER include/spdk/trace_parser.h 00:04:44.866 TEST_HEADER include/spdk/tree.h 00:04:44.866 TEST_HEADER include/spdk/ublk.h 00:04:44.866 TEST_HEADER include/spdk/util.h 00:04:44.866 TEST_HEADER include/spdk/uuid.h 00:04:44.866 TEST_HEADER include/spdk/version.h 00:04:44.866 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:44.866 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:44.866 TEST_HEADER include/spdk/vhost.h 00:04:44.866 TEST_HEADER include/spdk/vmd.h 00:04:44.866 TEST_HEADER include/spdk/xor.h 00:04:44.866 TEST_HEADER include/spdk/zipf.h 00:04:44.866 CXX test/cpp_headers/accel.o 00:04:44.866 LINK poller_perf 00:04:44.866 LINK zipf 00:04:44.866 LINK nvmf_tgt 00:04:44.866 LINK spdk_trace_record 00:04:44.866 LINK iscsi_tgt 00:04:45.124 LINK bdev_svc 00:04:45.124 LINK spdk_tgt 00:04:45.124 CXX test/cpp_headers/accel_module.o 00:04:45.124 LINK spdk_trace 00:04:45.124 CC app/spdk_lspci/spdk_lspci.o 00:04:45.124 CXX test/cpp_headers/assert.o 00:04:45.124 CC app/spdk_nvme_perf/perf.o 00:04:45.124 CC examples/ioat/perf/perf.o 00:04:45.124 CC examples/ioat/verify/verify.o 00:04:45.124 CC examples/vmd/lsvmd/lsvmd.o 00:04:45.382 LINK spdk_lspci 00:04:45.382 CC app/spdk_nvme_discover/discovery_aer.o 00:04:45.382 CC app/spdk_nvme_identify/identify.o 00:04:45.382 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:45.382 LINK test_dma 00:04:45.382 CXX test/cpp_headers/barrier.o 00:04:45.382 LINK lsvmd 00:04:45.382 CXX test/cpp_headers/base64.o 00:04:45.382 LINK ioat_perf 00:04:45.382 LINK verify 00:04:45.382 LINK spdk_nvme_discover 00:04:45.640 CXX test/cpp_headers/bdev.o 00:04:45.640 CC examples/vmd/led/led.o 00:04:45.640 CC app/spdk_top/spdk_top.o 00:04:45.640 LINK nvme_fuzz 00:04:45.640 CC app/vhost/vhost.o 00:04:45.640 CC app/spdk_dd/spdk_dd.o 00:04:45.640 CC test/app/histogram_perf/histogram_perf.o 00:04:45.640 CXX test/cpp_headers/bdev_module.o 00:04:45.640 LINK led 00:04:45.640 CC app/fio/nvme/fio_plugin.o 00:04:45.898 LINK vhost 00:04:45.898 LINK histogram_perf 00:04:45.898 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:45.898 CXX test/cpp_headers/bdev_zone.o 00:04:45.898 LINK spdk_dd 00:04:45.898 CC examples/idxd/perf/perf.o 00:04:45.898 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:46.157 CXX test/cpp_headers/bit_array.o 00:04:46.157 LINK spdk_nvme_identify 00:04:46.157 LINK spdk_nvme_perf 00:04:46.157 CXX test/cpp_headers/bit_pool.o 00:04:46.157 CC examples/thread/thread/thread_ex.o 00:04:46.157 LINK interrupt_tgt 00:04:46.157 CC test/app/jsoncat/jsoncat.o 00:04:46.157 CXX test/cpp_headers/blob_bdev.o 00:04:46.157 LINK spdk_nvme 00:04:46.157 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 
00:04:46.414 LINK idxd_perf 00:04:46.414 CC examples/sock/hello_world/hello_sock.o 00:04:46.414 CXX test/cpp_headers/blobfs_bdev.o 00:04:46.414 LINK jsoncat 00:04:46.414 LINK thread 00:04:46.414 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:46.414 CXX test/cpp_headers/blobfs.o 00:04:46.414 CC app/fio/bdev/fio_plugin.o 00:04:46.414 LINK spdk_top 00:04:46.414 CXX test/cpp_headers/blob.o 00:04:46.414 CC test/app/stub/stub.o 00:04:46.672 CXX test/cpp_headers/conf.o 00:04:46.672 LINK hello_sock 00:04:46.672 CC test/env/mem_callbacks/mem_callbacks.o 00:04:46.672 CC test/event/event_perf/event_perf.o 00:04:46.672 CC test/event/reactor/reactor.o 00:04:46.672 CC test/env/vtophys/vtophys.o 00:04:46.672 CXX test/cpp_headers/config.o 00:04:46.672 LINK stub 00:04:46.672 CXX test/cpp_headers/cpuset.o 00:04:46.672 LINK mem_callbacks 00:04:46.672 LINK vhost_fuzz 00:04:46.672 LINK event_perf 00:04:46.672 LINK reactor 00:04:46.930 LINK vtophys 00:04:46.930 CC examples/accel/perf/accel_perf.o 00:04:46.930 CXX test/cpp_headers/crc16.o 00:04:46.930 CC test/event/reactor_perf/reactor_perf.o 00:04:46.930 LINK spdk_bdev 00:04:46.930 CC test/event/app_repeat/app_repeat.o 00:04:46.930 CXX test/cpp_headers/crc32.o 00:04:46.930 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:46.930 CC test/event/scheduler/scheduler.o 00:04:46.930 CC test/rpc_client/rpc_client_test.o 00:04:46.930 LINK reactor_perf 00:04:46.930 CC test/nvme/aer/aer.o 00:04:47.188 LINK app_repeat 00:04:47.188 LINK env_dpdk_post_init 00:04:47.188 CXX test/cpp_headers/crc64.o 00:04:47.188 LINK rpc_client_test 00:04:47.188 CC test/accel/dif/dif.o 00:04:47.188 LINK scheduler 00:04:47.188 CXX test/cpp_headers/dif.o 00:04:47.188 CC test/blobfs/mkfs/mkfs.o 00:04:47.188 LINK aer 00:04:47.188 CC test/env/memory/memory_ut.o 00:04:47.188 LINK iscsi_fuzz 00:04:47.188 CXX test/cpp_headers/dma.o 00:04:47.188 LINK accel_perf 00:04:47.445 CC test/env/pci/pci_ut.o 00:04:47.445 CC test/lvol/esnap/esnap.o 00:04:47.445 CC test/nvme/reset/reset.o 00:04:47.445 LINK mkfs 00:04:47.445 CXX test/cpp_headers/endian.o 00:04:47.445 CC examples/blob/hello_world/hello_blob.o 00:04:47.445 CC examples/nvme/hello_world/hello_world.o 00:04:47.445 CXX test/cpp_headers/env_dpdk.o 00:04:47.703 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:47.703 LINK reset 00:04:47.703 CC examples/nvme/reconnect/reconnect.o 00:04:47.703 CXX test/cpp_headers/env.o 00:04:47.703 LINK hello_world 00:04:47.703 LINK hello_blob 00:04:47.703 LINK pci_ut 00:04:47.703 CXX test/cpp_headers/event.o 00:04:47.961 LINK hello_fsdev 00:04:47.961 CC test/nvme/sgl/sgl.o 00:04:47.961 LINK dif 00:04:47.961 CC test/nvme/e2edp/nvme_dp.o 00:04:47.961 CC examples/blob/cli/blobcli.o 00:04:47.961 LINK reconnect 00:04:47.961 CXX test/cpp_headers/fd_group.o 00:04:47.961 CC test/nvme/overhead/overhead.o 00:04:47.961 CC test/nvme/err_injection/err_injection.o 00:04:47.961 CXX test/cpp_headers/fd.o 00:04:47.961 LINK memory_ut 00:04:48.219 CC examples/nvme/arbitration/arbitration.o 00:04:48.219 LINK sgl 00:04:48.219 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:48.219 LINK err_injection 00:04:48.219 CXX test/cpp_headers/file.o 00:04:48.219 LINK nvme_dp 00:04:48.219 CXX test/cpp_headers/fsdev.o 00:04:48.219 LINK overhead 00:04:48.219 CXX test/cpp_headers/fsdev_module.o 00:04:48.219 CXX test/cpp_headers/ftl.o 00:04:48.219 CXX test/cpp_headers/fuse_dispatcher.o 00:04:48.219 CC examples/nvme/hotplug/hotplug.o 00:04:48.477 CC test/nvme/startup/startup.o 00:04:48.477 CXX test/cpp_headers/gpt_spec.o 00:04:48.477 LINK blobcli 
00:04:48.477 LINK arbitration 00:04:48.477 CC test/nvme/simple_copy/simple_copy.o 00:04:48.477 CC test/nvme/reserve/reserve.o 00:04:48.477 CC test/nvme/connect_stress/connect_stress.o 00:04:48.477 LINK hotplug 00:04:48.477 CXX test/cpp_headers/hexlify.o 00:04:48.478 LINK startup 00:04:48.478 CXX test/cpp_headers/histogram_data.o 00:04:48.478 LINK connect_stress 00:04:48.478 LINK reserve 00:04:48.735 LINK nvme_manage 00:04:48.735 CXX test/cpp_headers/idxd.o 00:04:48.735 CC test/nvme/boot_partition/boot_partition.o 00:04:48.735 LINK simple_copy 00:04:48.735 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:48.735 CXX test/cpp_headers/idxd_spec.o 00:04:48.735 CXX test/cpp_headers/init.o 00:04:48.735 CC examples/nvme/abort/abort.o 00:04:48.735 CXX test/cpp_headers/ioat.o 00:04:48.735 CXX test/cpp_headers/ioat_spec.o 00:04:48.735 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:48.735 CXX test/cpp_headers/iscsi_spec.o 00:04:48.735 LINK boot_partition 00:04:48.735 LINK cmb_copy 00:04:48.993 CXX test/cpp_headers/json.o 00:04:48.993 CXX test/cpp_headers/jsonrpc.o 00:04:48.993 CC test/nvme/compliance/nvme_compliance.o 00:04:48.993 LINK pmr_persistence 00:04:48.993 CC test/nvme/fused_ordering/fused_ordering.o 00:04:48.993 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:48.993 CC test/bdev/bdevio/bdevio.o 00:04:48.993 CXX test/cpp_headers/keyring.o 00:04:48.993 CXX test/cpp_headers/keyring_module.o 00:04:48.993 CC test/nvme/fdp/fdp.o 00:04:48.993 LINK abort 00:04:48.993 LINK fused_ordering 00:04:48.993 LINK doorbell_aers 00:04:49.252 LINK nvme_compliance 00:04:49.252 CXX test/cpp_headers/likely.o 00:04:49.252 CC examples/bdev/hello_world/hello_bdev.o 00:04:49.252 CXX test/cpp_headers/log.o 00:04:49.252 CXX test/cpp_headers/lvol.o 00:04:49.252 CXX test/cpp_headers/md5.o 00:04:49.252 CC test/nvme/cuse/cuse.o 00:04:49.252 CC examples/bdev/bdevperf/bdevperf.o 00:04:49.252 CXX test/cpp_headers/memory.o 00:04:49.252 LINK fdp 00:04:49.252 LINK bdevio 00:04:49.252 CXX test/cpp_headers/mmio.o 00:04:49.252 CXX test/cpp_headers/nbd.o 00:04:49.509 LINK hello_bdev 00:04:49.509 CXX test/cpp_headers/net.o 00:04:49.509 CXX test/cpp_headers/notify.o 00:04:49.509 CXX test/cpp_headers/nvme.o 00:04:49.509 CXX test/cpp_headers/nvme_intel.o 00:04:49.509 CXX test/cpp_headers/nvme_ocssd.o 00:04:49.509 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:49.509 CXX test/cpp_headers/nvme_spec.o 00:04:49.509 CXX test/cpp_headers/nvme_zns.o 00:04:49.509 CXX test/cpp_headers/nvmf_cmd.o 00:04:49.509 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:49.509 CXX test/cpp_headers/nvmf.o 00:04:49.509 CXX test/cpp_headers/nvmf_spec.o 00:04:49.509 CXX test/cpp_headers/nvmf_transport.o 00:04:49.766 CXX test/cpp_headers/opal.o 00:04:49.766 CXX test/cpp_headers/opal_spec.o 00:04:49.766 CXX test/cpp_headers/pci_ids.o 00:04:49.766 CXX test/cpp_headers/pipe.o 00:04:49.766 CXX test/cpp_headers/queue.o 00:04:49.766 CXX test/cpp_headers/reduce.o 00:04:49.766 CXX test/cpp_headers/rpc.o 00:04:49.766 CXX test/cpp_headers/scheduler.o 00:04:49.766 CXX test/cpp_headers/scsi.o 00:04:49.766 CXX test/cpp_headers/scsi_spec.o 00:04:49.766 CXX test/cpp_headers/sock.o 00:04:49.766 CXX test/cpp_headers/stdinc.o 00:04:49.766 CXX test/cpp_headers/string.o 00:04:49.766 CXX test/cpp_headers/thread.o 00:04:49.766 CXX test/cpp_headers/trace.o 00:04:50.024 CXX test/cpp_headers/trace_parser.o 00:04:50.024 CXX test/cpp_headers/tree.o 00:04:50.024 LINK bdevperf 00:04:50.024 CXX test/cpp_headers/ublk.o 00:04:50.024 CXX test/cpp_headers/util.o 00:04:50.024 CXX test/cpp_headers/uuid.o 
00:04:50.024 CXX test/cpp_headers/version.o 00:04:50.024 CXX test/cpp_headers/vfio_user_pci.o 00:04:50.024 CXX test/cpp_headers/vfio_user_spec.o 00:04:50.024 CXX test/cpp_headers/vhost.o 00:04:50.024 CXX test/cpp_headers/vmd.o 00:04:50.024 CXX test/cpp_headers/xor.o 00:04:50.024 CXX test/cpp_headers/zipf.o 00:04:50.282 CC examples/nvmf/nvmf/nvmf.o 00:04:50.282 LINK cuse 00:04:50.543 LINK nvmf 00:04:52.454 LINK esnap 00:04:52.713 00:04:52.713 real 1m0.525s 00:04:52.713 user 4m58.382s 00:04:52.713 sys 0m48.881s 00:04:52.713 21:37:15 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:52.713 ************************************ 00:04:52.713 END TEST make 00:04:52.713 ************************************ 00:04:52.713 21:37:15 make -- common/autotest_common.sh@10 -- $ set +x 00:04:52.713 21:37:15 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:52.713 21:37:15 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:52.713 21:37:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:52.713 21:37:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.713 21:37:15 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:52.713 21:37:15 -- pm/common@44 -- $ pid=5808 00:04:52.713 21:37:15 -- pm/common@50 -- $ kill -TERM 5808 00:04:52.713 21:37:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.713 21:37:15 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:52.713 21:37:15 -- pm/common@44 -- $ pid=5809 00:04:52.713 21:37:15 -- pm/common@50 -- $ kill -TERM 5809 00:04:52.713 21:37:15 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:52.713 21:37:15 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:52.713 21:37:15 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:52.713 21:37:15 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:52.713 21:37:15 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:52.974 21:37:15 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:52.974 21:37:15 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.974 21:37:15 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.974 21:37:15 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.974 21:37:15 -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.974 21:37:15 -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.974 21:37:15 -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.975 21:37:15 -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.975 21:37:15 -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.975 21:37:15 -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.975 21:37:15 -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.975 21:37:15 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.975 21:37:15 -- scripts/common.sh@344 -- # case "$op" in 00:04:52.975 21:37:15 -- scripts/common.sh@345 -- # : 1 00:04:52.975 21:37:15 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.975 21:37:15 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.975 21:37:15 -- scripts/common.sh@365 -- # decimal 1 00:04:52.975 21:37:15 -- scripts/common.sh@353 -- # local d=1 00:04:52.975 21:37:15 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.975 21:37:15 -- scripts/common.sh@355 -- # echo 1 00:04:52.975 21:37:15 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.975 21:37:15 -- scripts/common.sh@366 -- # decimal 2 00:04:52.975 21:37:15 -- scripts/common.sh@353 -- # local d=2 00:04:52.975 21:37:15 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.975 21:37:15 -- scripts/common.sh@355 -- # echo 2 00:04:52.975 21:37:15 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.975 21:37:15 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.975 21:37:15 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.975 21:37:15 -- scripts/common.sh@368 -- # return 0 00:04:52.975 21:37:15 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.975 21:37:15 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:52.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.975 --rc genhtml_branch_coverage=1 00:04:52.975 --rc genhtml_function_coverage=1 00:04:52.975 --rc genhtml_legend=1 00:04:52.975 --rc geninfo_all_blocks=1 00:04:52.975 --rc geninfo_unexecuted_blocks=1 00:04:52.975 00:04:52.975 ' 00:04:52.975 21:37:15 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:52.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.975 --rc genhtml_branch_coverage=1 00:04:52.975 --rc genhtml_function_coverage=1 00:04:52.975 --rc genhtml_legend=1 00:04:52.975 --rc geninfo_all_blocks=1 00:04:52.975 --rc geninfo_unexecuted_blocks=1 00:04:52.975 00:04:52.975 ' 00:04:52.975 21:37:15 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:52.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.975 --rc genhtml_branch_coverage=1 00:04:52.975 --rc genhtml_function_coverage=1 00:04:52.975 --rc genhtml_legend=1 00:04:52.975 --rc geninfo_all_blocks=1 00:04:52.975 --rc geninfo_unexecuted_blocks=1 00:04:52.975 00:04:52.975 ' 00:04:52.975 21:37:15 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:52.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.975 --rc genhtml_branch_coverage=1 00:04:52.975 --rc genhtml_function_coverage=1 00:04:52.975 --rc genhtml_legend=1 00:04:52.975 --rc geninfo_all_blocks=1 00:04:52.975 --rc geninfo_unexecuted_blocks=1 00:04:52.975 00:04:52.975 ' 00:04:52.975 21:37:15 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:52.975 21:37:15 -- nvmf/common.sh@7 -- # uname -s 00:04:52.975 21:37:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:52.975 21:37:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:52.975 21:37:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:52.975 21:37:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:52.975 21:37:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:52.975 21:37:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:52.975 21:37:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:52.975 21:37:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:52.975 21:37:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:52.975 21:37:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:52.975 21:37:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dedf263a-1388-4f83-8c1a-6d151fbf491d 00:04:52.975 
21:37:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=dedf263a-1388-4f83-8c1a-6d151fbf491d 00:04:52.975 21:37:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:52.975 21:37:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:52.975 21:37:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:52.975 21:37:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:52.975 21:37:15 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:52.975 21:37:15 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:52.975 21:37:15 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:52.975 21:37:15 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:52.975 21:37:15 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:52.975 21:37:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.975 21:37:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.975 21:37:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.975 21:37:15 -- paths/export.sh@5 -- # export PATH 00:04:52.975 21:37:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.975 21:37:15 -- nvmf/common.sh@51 -- # : 0 00:04:52.975 21:37:15 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:52.975 21:37:15 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:52.975 21:37:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:52.975 21:37:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:52.975 21:37:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:52.975 21:37:15 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:52.975 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:52.975 21:37:15 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:52.975 21:37:15 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:52.975 21:37:15 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:52.975 21:37:15 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:52.975 21:37:15 -- spdk/autotest.sh@32 -- # uname -s 00:04:52.975 21:37:15 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:52.975 21:37:15 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:52.975 21:37:15 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:52.975 21:37:15 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:52.975 21:37:15 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:52.975 21:37:15 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:52.975 21:37:15 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:52.975 21:37:15 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:52.975 21:37:15 -- spdk/autotest.sh@48 -- # udevadm_pid=66197 00:04:52.975 21:37:15 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:52.975 21:37:15 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:52.975 21:37:15 -- pm/common@17 -- # local monitor 00:04:52.975 21:37:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.975 21:37:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.975 21:37:15 -- pm/common@25 -- # sleep 1 00:04:52.975 21:37:15 -- pm/common@21 -- # date +%s 00:04:52.975 21:37:15 -- pm/common@21 -- # date +%s 00:04:52.975 21:37:15 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732743435 00:04:52.975 21:37:15 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732743435 00:04:52.975 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732743435_collect-vmstat.pm.log 00:04:52.975 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732743435_collect-cpu-load.pm.log 00:04:53.914 21:37:16 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:53.915 21:37:16 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:53.915 21:37:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:53.915 21:37:16 -- common/autotest_common.sh@10 -- # set +x 00:04:53.915 21:37:16 -- spdk/autotest.sh@59 -- # create_test_list 00:04:53.915 21:37:16 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:53.915 21:37:16 -- common/autotest_common.sh@10 -- # set +x 00:04:53.915 21:37:17 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:53.915 21:37:17 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:53.915 21:37:17 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:53.915 21:37:17 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:53.915 21:37:17 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:53.915 21:37:17 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:53.915 21:37:17 -- common/autotest_common.sh@1457 -- # uname 00:04:53.915 21:37:17 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:53.915 21:37:17 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:53.915 21:37:17 -- common/autotest_common.sh@1477 -- # uname 00:04:54.173 21:37:17 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:54.173 21:37:17 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:54.173 21:37:17 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:54.173 lcov: LCOV version 1.15 00:04:54.173 21:37:17 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:09.067 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:09.067 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:23.986 21:37:45 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:23.986 21:37:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:23.986 21:37:45 -- common/autotest_common.sh@10 -- # set +x 00:05:23.986 21:37:45 -- spdk/autotest.sh@78 -- # rm -f 00:05:23.986 21:37:45 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:23.986 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:23.986 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:23.986 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:23.986 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:23.986 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:23.986 21:37:46 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:23.986 21:37:46 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:23.986 21:37:46 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:23.986 21:37:46 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:23.986 21:37:46 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:23.986 21:37:46 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:23.986 21:37:46 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:23.986 21:37:46 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:23.986 21:37:46 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:23.986 21:37:46 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:23.986 21:37:46 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:23.986 21:37:46 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:23.986 21:37:46 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:23.986 21:37:46 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:23.986 21:37:46 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:23.986 21:37:46 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:05:23.986 21:37:46 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:23.986 21:37:46 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:23.986 21:37:46 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:23.986 21:37:46 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:23.986 21:37:46 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:05:23.986 21:37:46 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:05:23.986 21:37:46 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:23.986 21:37:46 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:23.986 21:37:46 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:23.986 21:37:46 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:05:23.986 21:37:46 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:23.986 21:37:46 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:23.986 21:37:46 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:23.986 21:37:46 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:23.986 21:37:46 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:05:23.986 21:37:46 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:05:23.986 21:37:46 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:23.986 21:37:46 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:23.986 21:37:46 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:23.986 21:37:46 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:05:23.986 21:37:46 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:05:23.986 21:37:46 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:23.986 21:37:46 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:23.986 21:37:46 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:23.986 21:37:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:23.986 21:37:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:23.986 21:37:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:23.986 21:37:46 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:23.986 21:37:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:23.986 No valid GPT data, bailing 00:05:23.986 21:37:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:23.986 21:37:46 -- scripts/common.sh@394 -- # pt= 00:05:23.987 21:37:46 -- scripts/common.sh@395 -- # return 1 00:05:23.987 21:37:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:23.987 1+0 records in 00:05:23.987 1+0 records out 00:05:23.987 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0315629 s, 33.2 MB/s 00:05:23.987 21:37:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:23.987 21:37:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:23.987 21:37:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:23.987 21:37:46 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:23.987 21:37:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:23.987 No valid GPT data, bailing 00:05:23.987 21:37:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:23.987 21:37:46 -- scripts/common.sh@394 -- # pt= 00:05:23.987 21:37:46 -- scripts/common.sh@395 -- # return 1 00:05:23.987 21:37:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:23.987 1+0 records in 00:05:23.987 1+0 records out 00:05:23.987 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00565852 s, 185 MB/s 00:05:23.987 21:37:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:23.987 21:37:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:23.987 21:37:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:23.987 21:37:46 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:23.987 21:37:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:23.987 No valid GPT data, bailing 00:05:23.987 21:37:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:23.987 21:37:46 -- scripts/common.sh@394 -- # pt= 00:05:23.987 21:37:46 -- scripts/common.sh@395 -- # return 1 00:05:23.987 21:37:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:23.987 1+0 
records in 00:05:23.987 1+0 records out 00:05:23.987 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00540753 s, 194 MB/s 00:05:23.987 21:37:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:23.987 21:37:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:23.987 21:37:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:23.987 21:37:46 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:23.987 21:37:46 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:23.987 No valid GPT data, bailing 00:05:23.987 21:37:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:23.987 21:37:47 -- scripts/common.sh@394 -- # pt= 00:05:23.987 21:37:47 -- scripts/common.sh@395 -- # return 1 00:05:23.987 21:37:47 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:23.987 1+0 records in 00:05:23.987 1+0 records out 00:05:23.987 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00463716 s, 226 MB/s 00:05:23.987 21:37:47 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:23.987 21:37:47 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:23.987 21:37:47 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:23.987 21:37:47 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:23.987 21:37:47 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:23.987 No valid GPT data, bailing 00:05:23.987 21:37:47 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:23.987 21:37:47 -- scripts/common.sh@394 -- # pt= 00:05:23.987 21:37:47 -- scripts/common.sh@395 -- # return 1 00:05:23.987 21:37:47 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:24.248 1+0 records in 00:05:24.248 1+0 records out 00:05:24.248 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00572378 s, 183 MB/s 00:05:24.248 21:37:47 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:24.248 21:37:47 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:24.248 21:37:47 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:24.248 21:37:47 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:24.248 21:37:47 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:24.248 No valid GPT data, bailing 00:05:24.248 21:37:47 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:24.248 21:37:47 -- scripts/common.sh@394 -- # pt= 00:05:24.248 21:37:47 -- scripts/common.sh@395 -- # return 1 00:05:24.248 21:37:47 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:24.248 1+0 records in 00:05:24.248 1+0 records out 00:05:24.248 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0060158 s, 174 MB/s 00:05:24.248 21:37:47 -- spdk/autotest.sh@105 -- # sync 00:05:24.248 21:37:47 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:24.248 21:37:47 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:24.249 21:37:47 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:26.165 21:37:49 -- spdk/autotest.sh@111 -- # uname -s 00:05:26.165 21:37:49 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:26.165 21:37:49 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:26.165 21:37:49 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:26.425 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:26.999 
Hugepages 00:05:26.999 node hugesize free / total 00:05:26.999 node0 1048576kB 0 / 0 00:05:26.999 node0 2048kB 0 / 0 00:05:26.999 00:05:26.999 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:26.999 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:26.999 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:27.260 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:05:27.260 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:27.260 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:27.261 21:37:50 -- spdk/autotest.sh@117 -- # uname -s 00:05:27.261 21:37:50 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:27.261 21:37:50 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:27.261 21:37:50 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:27.833 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:28.447 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:28.447 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:28.447 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:28.447 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:28.447 21:37:51 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:29.391 21:37:52 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:29.392 21:37:52 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:29.392 21:37:52 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:29.392 21:37:52 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:29.392 21:37:52 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:29.392 21:37:52 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:29.392 21:37:52 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:29.392 21:37:52 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:29.392 21:37:52 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:29.653 21:37:52 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:29.653 21:37:52 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:29.653 21:37:52 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:29.914 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:29.914 Waiting for block devices as requested 00:05:30.176 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:30.176 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:30.176 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:30.437 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:35.731 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:35.731 21:37:58 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:35.731 21:37:58 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:35.732 21:37:58 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:35.732 21:37:58 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:35.732 21:37:58 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:35.732 21:37:58 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:35.732 21:37:58 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:35.732 21:37:58 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:35.732 21:37:58 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:35.732 21:37:58 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:35.732 21:37:58 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:35.732 21:37:58 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:35.732 21:37:58 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:35.732 21:37:58 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:35.732 21:37:58 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:35.732 21:37:58 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:35.732 21:37:58 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:35.732 21:37:58 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:35.732 21:37:58 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:35.732 21:37:58 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:35.732 21:37:58 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:35.732 21:37:58 -- common/autotest_common.sh@1543 -- # continue 00:05:35.732 21:37:58 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:35.732 21:37:58 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:35.732 21:37:58 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:35.732 21:37:58 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:35.732 21:37:58 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:35.732 21:37:58 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:35.732 21:37:58 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:35.732 21:37:58 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:35.732 21:37:58 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:35.732 21:37:58 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:35.732 21:37:58 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:35.732 21:37:58 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:35.732 21:37:58 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:35.732 21:37:58 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:35.732 21:37:58 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:35.732 21:37:58 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:35.732 21:37:58 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:35.732 21:37:58 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:35.732 21:37:58 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:35.732 21:37:58 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:35.732 21:37:58 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:35.732 21:37:58 -- common/autotest_common.sh@1543 -- # continue 00:05:35.732 21:37:58 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:35.732 21:37:58 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:35.732 21:37:58 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 
00:05:35.732 21:37:58 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:35.732 21:37:58 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:35.732 21:37:58 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:35.732 21:37:58 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:35.732 21:37:58 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:35.732 21:37:58 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:35.732 21:37:58 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:35.732 21:37:58 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:35.732 21:37:58 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:35.732 21:37:58 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:35.732 21:37:58 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:35.732 21:37:58 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:35.732 21:37:58 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:35.732 21:37:58 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:35.732 21:37:58 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:35.732 21:37:58 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:35.732 21:37:58 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:35.732 21:37:58 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:35.732 21:37:58 -- common/autotest_common.sh@1543 -- # continue 00:05:35.732 21:37:58 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:35.732 21:37:58 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:35.732 21:37:58 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:35.732 21:37:58 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:05:35.732 21:37:58 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:35.732 21:37:58 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:35.732 21:37:58 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:35.732 21:37:58 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:35.732 21:37:58 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:35.732 21:37:58 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:35.732 21:37:58 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:35.732 21:37:58 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:35.732 21:37:58 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:35.732 21:37:58 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:35.732 21:37:58 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:35.732 21:37:58 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:35.732 21:37:58 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:35.732 21:37:58 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:35.732 21:37:58 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:35.732 21:37:58 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:35.732 21:37:58 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
00:05:35.732 21:37:58 -- common/autotest_common.sh@1543 -- # continue 00:05:35.732 21:37:58 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:35.732 21:37:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:35.732 21:37:58 -- common/autotest_common.sh@10 -- # set +x 00:05:35.732 21:37:58 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:35.732 21:37:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:35.732 21:37:58 -- common/autotest_common.sh@10 -- # set +x 00:05:35.732 21:37:58 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:35.994 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:36.567 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:36.567 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:36.567 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:36.567 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:36.829 21:37:59 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:36.829 21:37:59 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:36.829 21:37:59 -- common/autotest_common.sh@10 -- # set +x 00:05:36.829 21:37:59 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:36.829 21:37:59 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:36.829 21:37:59 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:36.829 21:37:59 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:36.829 21:37:59 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:36.829 21:37:59 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:36.830 21:37:59 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:36.830 21:37:59 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:36.830 21:37:59 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:36.830 21:37:59 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:36.830 21:37:59 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:36.830 21:37:59 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:36.830 21:37:59 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:36.830 21:37:59 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:36.830 21:37:59 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:36.830 21:37:59 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:36.830 21:37:59 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:36.830 21:37:59 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:36.830 21:37:59 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:36.830 21:37:59 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:36.830 21:37:59 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:36.830 21:37:59 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:36.830 21:37:59 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:36.830 21:37:59 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:36.830 21:37:59 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:36.830 21:37:59 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:36.830 21:37:59 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:05:36.830 21:37:59 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:36.830 21:37:59 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:36.830 21:37:59 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:36.830 21:37:59 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:36.830 21:37:59 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:36.830 21:37:59 -- common/autotest_common.sh@1572 -- # return 0 00:05:36.830 21:37:59 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:36.830 21:37:59 -- common/autotest_common.sh@1580 -- # return 0 00:05:36.830 21:37:59 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:36.830 21:37:59 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:36.830 21:37:59 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:36.830 21:37:59 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:36.830 21:37:59 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:36.830 21:37:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:36.830 21:37:59 -- common/autotest_common.sh@10 -- # set +x 00:05:36.830 21:37:59 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:36.830 21:37:59 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:36.830 21:37:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.830 21:37:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.830 21:37:59 -- common/autotest_common.sh@10 -- # set +x 00:05:36.830 ************************************ 00:05:36.830 START TEST env 00:05:36.830 ************************************ 00:05:36.830 21:37:59 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:37.090 * Looking for test storage... 00:05:37.090 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:37.090 21:38:00 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:37.091 21:38:00 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:37.091 21:38:00 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:37.091 21:38:00 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:37.091 21:38:00 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.091 21:38:00 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.091 21:38:00 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.091 21:38:00 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.091 21:38:00 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.091 21:38:00 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.091 21:38:00 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.091 21:38:00 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.091 21:38:00 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.091 21:38:00 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.091 21:38:00 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.091 21:38:00 env -- scripts/common.sh@344 -- # case "$op" in 00:05:37.091 21:38:00 env -- scripts/common.sh@345 -- # : 1 00:05:37.091 21:38:00 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.091 21:38:00 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:37.091 21:38:00 env -- scripts/common.sh@365 -- # decimal 1 00:05:37.091 21:38:00 env -- scripts/common.sh@353 -- # local d=1 00:05:37.091 21:38:00 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.091 21:38:00 env -- scripts/common.sh@355 -- # echo 1 00:05:37.091 21:38:00 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.091 21:38:00 env -- scripts/common.sh@366 -- # decimal 2 00:05:37.091 21:38:00 env -- scripts/common.sh@353 -- # local d=2 00:05:37.091 21:38:00 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.091 21:38:00 env -- scripts/common.sh@355 -- # echo 2 00:05:37.091 21:38:00 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.091 21:38:00 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.091 21:38:00 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.091 21:38:00 env -- scripts/common.sh@368 -- # return 0 00:05:37.091 21:38:00 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.091 21:38:00 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:37.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.091 --rc genhtml_branch_coverage=1 00:05:37.091 --rc genhtml_function_coverage=1 00:05:37.091 --rc genhtml_legend=1 00:05:37.091 --rc geninfo_all_blocks=1 00:05:37.091 --rc geninfo_unexecuted_blocks=1 00:05:37.091 00:05:37.091 ' 00:05:37.091 21:38:00 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:37.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.091 --rc genhtml_branch_coverage=1 00:05:37.091 --rc genhtml_function_coverage=1 00:05:37.091 --rc genhtml_legend=1 00:05:37.091 --rc geninfo_all_blocks=1 00:05:37.091 --rc geninfo_unexecuted_blocks=1 00:05:37.091 00:05:37.091 ' 00:05:37.091 21:38:00 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:37.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.091 --rc genhtml_branch_coverage=1 00:05:37.091 --rc genhtml_function_coverage=1 00:05:37.091 --rc genhtml_legend=1 00:05:37.091 --rc geninfo_all_blocks=1 00:05:37.091 --rc geninfo_unexecuted_blocks=1 00:05:37.091 00:05:37.091 ' 00:05:37.091 21:38:00 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:37.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.091 --rc genhtml_branch_coverage=1 00:05:37.091 --rc genhtml_function_coverage=1 00:05:37.091 --rc genhtml_legend=1 00:05:37.091 --rc geninfo_all_blocks=1 00:05:37.091 --rc geninfo_unexecuted_blocks=1 00:05:37.091 00:05:37.091 ' 00:05:37.091 21:38:00 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:37.091 21:38:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.091 21:38:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.091 21:38:00 env -- common/autotest_common.sh@10 -- # set +x 00:05:37.091 ************************************ 00:05:37.091 START TEST env_memory 00:05:37.091 ************************************ 00:05:37.091 21:38:00 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:37.091 00:05:37.091 00:05:37.091 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.091 http://cunit.sourceforge.net/ 00:05:37.091 00:05:37.091 00:05:37.091 Suite: memory 00:05:37.091 Test: alloc and free memory map ...[2024-11-27 21:38:00.145109] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:37.091 passed 00:05:37.091 Test: mem map translation ...[2024-11-27 21:38:00.184180] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:37.091 [2024-11-27 21:38:00.184316] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:37.091 [2024-11-27 21:38:00.184449] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:37.091 [2024-11-27 21:38:00.184486] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:37.352 passed 00:05:37.352 Test: mem map registration ...[2024-11-27 21:38:00.253152] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:37.352 [2024-11-27 21:38:00.253292] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:37.352 passed 00:05:37.352 Test: mem map adjacent registrations ...passed 00:05:37.352 00:05:37.352 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.352 suites 1 1 n/a 0 0 00:05:37.352 tests 4 4 4 0 0 00:05:37.352 asserts 152 152 152 0 n/a 00:05:37.352 00:05:37.352 Elapsed time = 0.234 seconds 00:05:37.352 00:05:37.352 real 0m0.268s 00:05:37.352 user 0m0.237s 00:05:37.352 sys 0m0.021s 00:05:37.352 21:38:00 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.352 21:38:00 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:37.352 ************************************ 00:05:37.352 END TEST env_memory 00:05:37.352 ************************************ 00:05:37.352 21:38:00 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:37.352 21:38:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.352 21:38:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.352 21:38:00 env -- common/autotest_common.sh@10 -- # set +x 00:05:37.352 ************************************ 00:05:37.352 START TEST env_vtophys 00:05:37.352 ************************************ 00:05:37.352 21:38:00 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:37.352 EAL: lib.eal log level changed from notice to debug 00:05:37.352 EAL: Detected lcore 0 as core 0 on socket 0 00:05:37.352 EAL: Detected lcore 1 as core 0 on socket 0 00:05:37.352 EAL: Detected lcore 2 as core 0 on socket 0 00:05:37.352 EAL: Detected lcore 3 as core 0 on socket 0 00:05:37.352 EAL: Detected lcore 4 as core 0 on socket 0 00:05:37.352 EAL: Detected lcore 5 as core 0 on socket 0 00:05:37.352 EAL: Detected lcore 6 as core 0 on socket 0 00:05:37.352 EAL: Detected lcore 7 as core 0 on socket 0 00:05:37.352 EAL: Detected lcore 8 as core 0 on socket 0 00:05:37.352 EAL: Detected lcore 9 as core 0 on socket 0 00:05:37.352 EAL: Maximum logical cores by configuration: 128 00:05:37.352 EAL: Detected CPU lcores: 10 00:05:37.352 EAL: Detected NUMA nodes: 1 00:05:37.352 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:37.352 EAL: Detected shared linkage of DPDK 00:05:37.352 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:37.352 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:37.352 EAL: Registered [vdev] bus. 00:05:37.352 EAL: bus.vdev log level changed from disabled to notice 00:05:37.352 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:37.352 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:37.353 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:37.353 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:37.353 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:37.353 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:37.353 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:37.353 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:37.353 EAL: No shared files mode enabled, IPC will be disabled 00:05:37.353 EAL: No shared files mode enabled, IPC is disabled 00:05:37.353 EAL: Selected IOVA mode 'PA' 00:05:37.353 EAL: Probing VFIO support... 00:05:37.353 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:37.353 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:37.353 EAL: Ask a virtual area of 0x2e000 bytes 00:05:37.353 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:37.353 EAL: Setting up physically contiguous memory... 00:05:37.353 EAL: Setting maximum number of open files to 524288 00:05:37.353 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:37.353 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:37.353 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.353 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:37.353 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.353 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.353 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:37.353 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:37.353 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.353 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:37.353 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.353 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.353 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:37.353 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:37.353 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.353 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:37.353 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.353 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.353 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:37.353 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:37.353 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.353 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:37.353 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.353 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.353 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:37.353 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:05:37.353 EAL: Hugepages will be freed exactly as allocated. 00:05:37.353 EAL: No shared files mode enabled, IPC is disabled 00:05:37.353 EAL: No shared files mode enabled, IPC is disabled 00:05:37.613 EAL: TSC frequency is ~2600000 KHz 00:05:37.613 EAL: Main lcore 0 is ready (tid=7f6e96222a40;cpuset=[0]) 00:05:37.613 EAL: Trying to obtain current memory policy. 00:05:37.613 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.613 EAL: Restoring previous memory policy: 0 00:05:37.613 EAL: request: mp_malloc_sync 00:05:37.613 EAL: No shared files mode enabled, IPC is disabled 00:05:37.613 EAL: Heap on socket 0 was expanded by 2MB 00:05:37.613 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:37.613 EAL: No shared files mode enabled, IPC is disabled 00:05:37.613 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:37.613 EAL: Mem event callback 'spdk:(nil)' registered 00:05:37.613 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:37.613 00:05:37.613 00:05:37.613 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.613 http://cunit.sourceforge.net/ 00:05:37.613 00:05:37.613 00:05:37.613 Suite: components_suite 00:05:37.873 Test: vtophys_malloc_test ...passed 00:05:37.873 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:37.873 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.873 EAL: Restoring previous memory policy: 4 00:05:37.873 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.873 EAL: request: mp_malloc_sync 00:05:37.874 EAL: No shared files mode enabled, IPC is disabled 00:05:37.874 EAL: Heap on socket 0 was expanded by 4MB 00:05:37.874 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.874 EAL: request: mp_malloc_sync 00:05:37.874 EAL: No shared files mode enabled, IPC is disabled 00:05:37.874 EAL: Heap on socket 0 was shrunk by 4MB 00:05:37.874 EAL: Trying to obtain current memory policy. 00:05:37.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.874 EAL: Restoring previous memory policy: 4 00:05:37.874 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.874 EAL: request: mp_malloc_sync 00:05:37.874 EAL: No shared files mode enabled, IPC is disabled 00:05:37.874 EAL: Heap on socket 0 was expanded by 6MB 00:05:37.874 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.874 EAL: request: mp_malloc_sync 00:05:37.874 EAL: No shared files mode enabled, IPC is disabled 00:05:37.874 EAL: Heap on socket 0 was shrunk by 6MB 00:05:37.874 EAL: Trying to obtain current memory policy. 00:05:37.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.874 EAL: Restoring previous memory policy: 4 00:05:37.874 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.874 EAL: request: mp_malloc_sync 00:05:37.874 EAL: No shared files mode enabled, IPC is disabled 00:05:37.874 EAL: Heap on socket 0 was expanded by 10MB 00:05:37.874 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.874 EAL: request: mp_malloc_sync 00:05:37.874 EAL: No shared files mode enabled, IPC is disabled 00:05:37.874 EAL: Heap on socket 0 was shrunk by 10MB 00:05:37.874 EAL: Trying to obtain current memory policy. 
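[editor's note] The "Heap on socket 0 was expanded by / shrunk by" pairs in this vtophys run come from the malloc tests growing the socket-0 hugepage heap on demand. A minimal sketch of reproducing this outside the harness: reserve hugepages with scripts/setup.sh and run the unit test binary directly. The paths are the ones appearing in this log; the HUGEMEM value (in MB) is an assumed amount, not taken from this run.

    # Reserve 2 MB hugepages, then run the vtophys unit test standalone
    # (same binary the harness invokes above). HUGEMEM=2048 is an assumption.
    sudo HUGEMEM=2048 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
    /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys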
00:05:37.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.874 EAL: Restoring previous memory policy: 4 00:05:37.874 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.874 EAL: request: mp_malloc_sync 00:05:37.874 EAL: No shared files mode enabled, IPC is disabled 00:05:37.874 EAL: Heap on socket 0 was expanded by 18MB 00:05:37.874 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.874 EAL: request: mp_malloc_sync 00:05:37.874 EAL: No shared files mode enabled, IPC is disabled 00:05:37.874 EAL: Heap on socket 0 was shrunk by 18MB 00:05:37.874 EAL: Trying to obtain current memory policy. 00:05:37.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.874 EAL: Restoring previous memory policy: 4 00:05:37.874 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.874 EAL: request: mp_malloc_sync 00:05:37.874 EAL: No shared files mode enabled, IPC is disabled 00:05:37.874 EAL: Heap on socket 0 was expanded by 34MB 00:05:37.874 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.874 EAL: request: mp_malloc_sync 00:05:37.874 EAL: No shared files mode enabled, IPC is disabled 00:05:37.874 EAL: Heap on socket 0 was shrunk by 34MB 00:05:37.874 EAL: Trying to obtain current memory policy. 00:05:37.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.874 EAL: Restoring previous memory policy: 4 00:05:37.874 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.874 EAL: request: mp_malloc_sync 00:05:37.874 EAL: No shared files mode enabled, IPC is disabled 00:05:37.874 EAL: Heap on socket 0 was expanded by 66MB 00:05:37.874 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.874 EAL: request: mp_malloc_sync 00:05:37.874 EAL: No shared files mode enabled, IPC is disabled 00:05:37.874 EAL: Heap on socket 0 was shrunk by 66MB 00:05:37.874 EAL: Trying to obtain current memory policy. 00:05:37.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.135 EAL: Restoring previous memory policy: 4 00:05:38.135 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.135 EAL: request: mp_malloc_sync 00:05:38.135 EAL: No shared files mode enabled, IPC is disabled 00:05:38.135 EAL: Heap on socket 0 was expanded by 130MB 00:05:38.135 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.135 EAL: request: mp_malloc_sync 00:05:38.135 EAL: No shared files mode enabled, IPC is disabled 00:05:38.135 EAL: Heap on socket 0 was shrunk by 130MB 00:05:38.135 EAL: Trying to obtain current memory policy. 00:05:38.135 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.135 EAL: Restoring previous memory policy: 4 00:05:38.135 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.135 EAL: request: mp_malloc_sync 00:05:38.135 EAL: No shared files mode enabled, IPC is disabled 00:05:38.135 EAL: Heap on socket 0 was expanded by 258MB 00:05:38.135 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.135 EAL: request: mp_malloc_sync 00:05:38.135 EAL: No shared files mode enabled, IPC is disabled 00:05:38.135 EAL: Heap on socket 0 was shrunk by 258MB 00:05:38.135 EAL: Trying to obtain current memory policy. 
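[editor's note] Each expansion here (4 MB up to 1026 MB further down) claims 2 MB hugepages and the matching shrink releases them. Assuming the pages were pre-reserved by setup.sh, HugePages_Free should drop while the heap is expanded and recover when it is shrunk; a rough way to watch that from a second shell is sketched below, with the 0.5 s interval being an arbitrary choice.

    # Watch hugepage usage while the malloc test grows and shrinks the heap.
    watch -n 0.5 'grep -E "HugePages_(Total|Free)" /proc/meminfo'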
00:05:38.135 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.393 EAL: Restoring previous memory policy: 4 00:05:38.393 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.393 EAL: request: mp_malloc_sync 00:05:38.393 EAL: No shared files mode enabled, IPC is disabled 00:05:38.393 EAL: Heap on socket 0 was expanded by 514MB 00:05:38.393 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.393 EAL: request: mp_malloc_sync 00:05:38.393 EAL: No shared files mode enabled, IPC is disabled 00:05:38.393 EAL: Heap on socket 0 was shrunk by 514MB 00:05:38.393 EAL: Trying to obtain current memory policy. 00:05:38.393 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.654 EAL: Restoring previous memory policy: 4 00:05:38.654 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.654 EAL: request: mp_malloc_sync 00:05:38.654 EAL: No shared files mode enabled, IPC is disabled 00:05:38.654 EAL: Heap on socket 0 was expanded by 1026MB 00:05:38.654 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.654 passed 00:05:38.654 00:05:38.654 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.654 suites 1 1 n/a 0 0 00:05:38.654 tests 2 2 2 0 0 00:05:38.654 asserts 5505 5505 5505 0 n/a 00:05:38.654 00:05:38.654 Elapsed time = 1.122 seconds 00:05:38.654 EAL: request: mp_malloc_sync 00:05:38.654 EAL: No shared files mode enabled, IPC is disabled 00:05:38.654 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:38.654 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.654 EAL: request: mp_malloc_sync 00:05:38.654 EAL: No shared files mode enabled, IPC is disabled 00:05:38.654 EAL: Heap on socket 0 was shrunk by 2MB 00:05:38.654 EAL: No shared files mode enabled, IPC is disabled 00:05:38.654 EAL: No shared files mode enabled, IPC is disabled 00:05:38.654 EAL: No shared files mode enabled, IPC is disabled 00:05:38.916 00:05:38.916 real 0m1.353s 00:05:38.916 user 0m0.547s 00:05:38.916 sys 0m0.667s 00:05:38.916 21:38:01 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.916 21:38:01 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:38.916 ************************************ 00:05:38.916 END TEST env_vtophys 00:05:38.916 ************************************ 00:05:38.916 21:38:01 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:38.916 21:38:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.916 21:38:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.916 21:38:01 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.916 ************************************ 00:05:38.916 START TEST env_pci 00:05:38.916 ************************************ 00:05:38.916 21:38:01 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:38.916 00:05:38.916 00:05:38.916 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.916 http://cunit.sourceforge.net/ 00:05:38.916 00:05:38.916 00:05:38.916 Suite: pci 00:05:38.916 Test: pci_hook ...[2024-11-27 21:38:01.851682] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68926 has claimed it 00:05:38.916 EAL: Cannot find device (10000:00:01.0) 00:05:38.916 passed 00:05:38.916 00:05:38.916 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.916 suites 1 1 n/a 0 0 00:05:38.916 tests 1 1 1 0 0 00:05:38.916 asserts 25 25 25 0 n/a 00:05:38.916 00:05:38.916 Elapsed 
time = 0.004 seconds 00:05:38.916 EAL: Failed to attach device on primary process 00:05:38.916 00:05:38.916 real 0m0.048s 00:05:38.916 user 0m0.023s 00:05:38.916 sys 0m0.024s 00:05:38.916 21:38:01 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.916 21:38:01 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:38.916 ************************************ 00:05:38.916 END TEST env_pci 00:05:38.916 ************************************ 00:05:38.916 21:38:01 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:38.916 21:38:01 env -- env/env.sh@15 -- # uname 00:05:38.916 21:38:01 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:38.916 21:38:01 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:38.916 21:38:01 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:38.916 21:38:01 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:38.916 21:38:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.916 21:38:01 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.916 ************************************ 00:05:38.916 START TEST env_dpdk_post_init 00:05:38.916 ************************************ 00:05:38.916 21:38:01 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:38.916 EAL: Detected CPU lcores: 10 00:05:38.916 EAL: Detected NUMA nodes: 1 00:05:38.916 EAL: Detected shared linkage of DPDK 00:05:38.916 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:38.916 EAL: Selected IOVA mode 'PA' 00:05:39.177 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:39.177 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:39.177 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:39.177 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:39.177 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:39.177 Starting DPDK initialization... 00:05:39.177 Starting SPDK post initialization... 00:05:39.177 SPDK NVMe probe 00:05:39.177 Attaching to 0000:00:10.0 00:05:39.177 Attaching to 0000:00:11.0 00:05:39.177 Attaching to 0000:00:12.0 00:05:39.177 Attaching to 0000:00:13.0 00:05:39.177 Attached to 0000:00:13.0 00:05:39.177 Attached to 0000:00:10.0 00:05:39.177 Attached to 0000:00:11.0 00:05:39.177 Attached to 0000:00:12.0 00:05:39.177 Cleaning up... 
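[editor's note] env_dpdk_post_init has just attached to the four emulated NVMe controllers (QEMU's 1b36:0010) at 0000:00:10.0 through 0000:00:13.0. A sketch of repeating that check by hand: list the matching PCI functions and rerun the same binary with the core mask and base virtual address used above. lspci being available in the guest is an assumption; the binary path and arguments are taken from this log.

    # List the emulated NVMe functions this run attached to, then rerun the
    # post-init check with the same arguments the harness used.
    lspci -nn -d 1b36:0010
    /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000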
00:05:39.177 00:05:39.177 real 0m0.210s 00:05:39.177 user 0m0.064s 00:05:39.177 sys 0m0.049s 00:05:39.177 21:38:02 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.177 ************************************ 00:05:39.177 21:38:02 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:39.177 END TEST env_dpdk_post_init 00:05:39.177 ************************************ 00:05:39.177 21:38:02 env -- env/env.sh@26 -- # uname 00:05:39.177 21:38:02 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:39.177 21:38:02 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:39.177 21:38:02 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.177 21:38:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.177 21:38:02 env -- common/autotest_common.sh@10 -- # set +x 00:05:39.177 ************************************ 00:05:39.177 START TEST env_mem_callbacks 00:05:39.177 ************************************ 00:05:39.177 21:38:02 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:39.177 EAL: Detected CPU lcores: 10 00:05:39.177 EAL: Detected NUMA nodes: 1 00:05:39.177 EAL: Detected shared linkage of DPDK 00:05:39.177 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:39.177 EAL: Selected IOVA mode 'PA' 00:05:39.439 00:05:39.439 00:05:39.439 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.439 http://cunit.sourceforge.net/ 00:05:39.439 00:05:39.439 00:05:39.439 Suite: memory 00:05:39.439 Test: test ... 00:05:39.439 register 0x200000200000 2097152 00:05:39.439 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:39.439 malloc 3145728 00:05:39.439 register 0x200000400000 4194304 00:05:39.439 buf 0x200000500000 len 3145728 PASSED 00:05:39.439 malloc 64 00:05:39.439 buf 0x2000004fff40 len 64 PASSED 00:05:39.439 malloc 4194304 00:05:39.439 register 0x200000800000 6291456 00:05:39.439 buf 0x200000a00000 len 4194304 PASSED 00:05:39.439 free 0x200000500000 3145728 00:05:39.439 free 0x2000004fff40 64 00:05:39.439 unregister 0x200000400000 4194304 PASSED 00:05:39.439 free 0x200000a00000 4194304 00:05:39.439 unregister 0x200000800000 6291456 PASSED 00:05:39.439 malloc 8388608 00:05:39.439 register 0x200000400000 10485760 00:05:39.439 buf 0x200000600000 len 8388608 PASSED 00:05:39.439 free 0x200000600000 8388608 00:05:39.439 unregister 0x200000400000 10485760 PASSED 00:05:39.439 passed 00:05:39.439 00:05:39.439 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.439 suites 1 1 n/a 0 0 00:05:39.439 tests 1 1 1 0 0 00:05:39.439 asserts 15 15 15 0 n/a 00:05:39.439 00:05:39.439 Elapsed time = 0.008 seconds 00:05:39.439 ************************************ 00:05:39.439 END TEST env_mem_callbacks 00:05:39.439 ************************************ 00:05:39.439 00:05:39.439 real 0m0.151s 00:05:39.439 user 0m0.020s 00:05:39.439 sys 0m0.029s 00:05:39.439 21:38:02 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.439 21:38:02 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:39.439 00:05:39.439 real 0m2.479s 00:05:39.439 user 0m1.057s 00:05:39.439 sys 0m0.990s 00:05:39.439 21:38:02 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.439 21:38:02 env -- common/autotest_common.sh@10 -- # set +x 00:05:39.439 ************************************ 00:05:39.439 END TEST env 00:05:39.439 
************************************ 00:05:39.439 21:38:02 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:39.439 21:38:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.439 21:38:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.439 21:38:02 -- common/autotest_common.sh@10 -- # set +x 00:05:39.439 ************************************ 00:05:39.439 START TEST rpc 00:05:39.439 ************************************ 00:05:39.439 21:38:02 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:39.439 * Looking for test storage... 00:05:39.439 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:39.439 21:38:02 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:39.439 21:38:02 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:39.439 21:38:02 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:39.700 21:38:02 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:39.700 21:38:02 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.700 21:38:02 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.700 21:38:02 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.700 21:38:02 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.700 21:38:02 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.700 21:38:02 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.700 21:38:02 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.700 21:38:02 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.700 21:38:02 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.700 21:38:02 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.700 21:38:02 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.700 21:38:02 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:39.700 21:38:02 rpc -- scripts/common.sh@345 -- # : 1 00:05:39.700 21:38:02 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.700 21:38:02 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.700 21:38:02 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:39.700 21:38:02 rpc -- scripts/common.sh@353 -- # local d=1 00:05:39.700 21:38:02 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.700 21:38:02 rpc -- scripts/common.sh@355 -- # echo 1 00:05:39.701 21:38:02 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.701 21:38:02 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:39.701 21:38:02 rpc -- scripts/common.sh@353 -- # local d=2 00:05:39.701 21:38:02 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.701 21:38:02 rpc -- scripts/common.sh@355 -- # echo 2 00:05:39.701 21:38:02 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.701 21:38:02 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.701 21:38:02 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.701 21:38:02 rpc -- scripts/common.sh@368 -- # return 0 00:05:39.701 21:38:02 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.701 21:38:02 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:39.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.701 --rc genhtml_branch_coverage=1 00:05:39.701 --rc genhtml_function_coverage=1 00:05:39.701 --rc genhtml_legend=1 00:05:39.701 --rc geninfo_all_blocks=1 00:05:39.701 --rc geninfo_unexecuted_blocks=1 00:05:39.701 00:05:39.701 ' 00:05:39.701 21:38:02 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:39.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.701 --rc genhtml_branch_coverage=1 00:05:39.701 --rc genhtml_function_coverage=1 00:05:39.701 --rc genhtml_legend=1 00:05:39.701 --rc geninfo_all_blocks=1 00:05:39.701 --rc geninfo_unexecuted_blocks=1 00:05:39.701 00:05:39.701 ' 00:05:39.701 21:38:02 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:39.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.701 --rc genhtml_branch_coverage=1 00:05:39.701 --rc genhtml_function_coverage=1 00:05:39.701 --rc genhtml_legend=1 00:05:39.701 --rc geninfo_all_blocks=1 00:05:39.701 --rc geninfo_unexecuted_blocks=1 00:05:39.701 00:05:39.701 ' 00:05:39.701 21:38:02 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:39.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.701 --rc genhtml_branch_coverage=1 00:05:39.701 --rc genhtml_function_coverage=1 00:05:39.701 --rc genhtml_legend=1 00:05:39.701 --rc geninfo_all_blocks=1 00:05:39.701 --rc geninfo_unexecuted_blocks=1 00:05:39.701 00:05:39.701 ' 00:05:39.701 21:38:02 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69053 00:05:39.701 21:38:02 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.701 21:38:02 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69053 00:05:39.701 21:38:02 rpc -- common/autotest_common.sh@835 -- # '[' -z 69053 ']' 00:05:39.701 21:38:02 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.701 21:38:02 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.701 21:38:02 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
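[editor's note] Here rpc.sh starts its own target with the bdev tracepoint group enabled (-e bdev) and waits for /var/tmp/spdk.sock to appear. A rough standalone sketch is below; the polling loop is a simplification of waitforlisten, the build/bin/spdk_trace path is an assumption about where the trace tool lands, and the snapshot command follows the hint the target prints a little further down for this run's PID (69053).

    # Start a target with bdev tracepoints enabled, wait for its RPC socket,
    # then snapshot the tracepoint shared memory. The crude polling loop
    # stands in for waitforlisten.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    tgt_pid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s spdk_tgt -p "$tgt_pid"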
00:05:39.701 21:38:02 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.701 21:38:02 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:39.701 21:38:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.701 [2024-11-27 21:38:02.679296] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:39.701 [2024-11-27 21:38:02.679716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69053 ] 00:05:39.961 [2024-11-27 21:38:02.824732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.961 [2024-11-27 21:38:02.843148] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:39.961 [2024-11-27 21:38:02.843190] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69053' to capture a snapshot of events at runtime. 00:05:39.961 [2024-11-27 21:38:02.843205] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:39.961 [2024-11-27 21:38:02.843212] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:39.961 [2024-11-27 21:38:02.843222] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69053 for offline analysis/debug. 00:05:39.961 [2024-11-27 21:38:02.843532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.531 21:38:03 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.531 21:38:03 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:40.531 21:38:03 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:40.531 21:38:03 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:40.531 21:38:03 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:40.531 21:38:03 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:40.531 21:38:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.531 21:38:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.531 21:38:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.531 ************************************ 00:05:40.531 START TEST rpc_integrity 00:05:40.531 ************************************ 00:05:40.531 21:38:03 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:40.531 21:38:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:40.531 21:38:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.531 21:38:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.531 21:38:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.531 21:38:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:40.531 21:38:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:40.531 21:38:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:40.531 21:38:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
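[editor's note] rpc_integrity drives the target through rpc_cmd, a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. A sketch of the same create/inspect/delete cycle issued directly is below; Malloc0 is the name the target auto-assigns to the first malloc bdev in this run, and the jq pipe is only there to count the returned bdevs.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_malloc_create 8 512                      # 8 MB malloc bdev, 512 B blocks -> Malloc0
    $RPC bdev_passthru_create -b Malloc0 -p Passthru0  # passthru bdev layered on top
    $RPC bdev_get_bdevs | jq length                    # both bdevs should be listed
    $RPC bdev_passthru_delete Passthru0
    $RPC bdev_malloc_delete Malloc0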
00:05:40.531 21:38:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.531 21:38:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.531 21:38:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.531 21:38:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:40.531 21:38:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:40.531 21:38:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.531 21:38:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.531 21:38:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.531 21:38:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:40.531 { 00:05:40.531 "name": "Malloc0", 00:05:40.531 "aliases": [ 00:05:40.531 "5b949a5f-6285-4c45-8e05-2df3d8252ee4" 00:05:40.531 ], 00:05:40.531 "product_name": "Malloc disk", 00:05:40.531 "block_size": 512, 00:05:40.531 "num_blocks": 16384, 00:05:40.531 "uuid": "5b949a5f-6285-4c45-8e05-2df3d8252ee4", 00:05:40.531 "assigned_rate_limits": { 00:05:40.531 "rw_ios_per_sec": 0, 00:05:40.531 "rw_mbytes_per_sec": 0, 00:05:40.531 "r_mbytes_per_sec": 0, 00:05:40.531 "w_mbytes_per_sec": 0 00:05:40.531 }, 00:05:40.531 "claimed": false, 00:05:40.531 "zoned": false, 00:05:40.531 "supported_io_types": { 00:05:40.531 "read": true, 00:05:40.531 "write": true, 00:05:40.531 "unmap": true, 00:05:40.531 "flush": true, 00:05:40.531 "reset": true, 00:05:40.531 "nvme_admin": false, 00:05:40.531 "nvme_io": false, 00:05:40.531 "nvme_io_md": false, 00:05:40.531 "write_zeroes": true, 00:05:40.531 "zcopy": true, 00:05:40.531 "get_zone_info": false, 00:05:40.531 "zone_management": false, 00:05:40.531 "zone_append": false, 00:05:40.531 "compare": false, 00:05:40.531 "compare_and_write": false, 00:05:40.531 "abort": true, 00:05:40.531 "seek_hole": false, 00:05:40.531 "seek_data": false, 00:05:40.531 "copy": true, 00:05:40.531 "nvme_iov_md": false 00:05:40.531 }, 00:05:40.531 "memory_domains": [ 00:05:40.531 { 00:05:40.531 "dma_device_id": "system", 00:05:40.531 "dma_device_type": 1 00:05:40.531 }, 00:05:40.531 { 00:05:40.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.531 "dma_device_type": 2 00:05:40.531 } 00:05:40.531 ], 00:05:40.531 "driver_specific": {} 00:05:40.531 } 00:05:40.531 ]' 00:05:40.531 21:38:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:40.531 21:38:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:40.531 21:38:03 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:40.531 21:38:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.531 21:38:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.531 [2024-11-27 21:38:03.634625] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:40.531 [2024-11-27 21:38:03.634678] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:40.531 [2024-11-27 21:38:03.634703] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:40.531 [2024-11-27 21:38:03.634712] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:40.531 [2024-11-27 21:38:03.636938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:40.531 [2024-11-27 21:38:03.636974] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:40.531 
Passthru0 00:05:40.531 21:38:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.531 21:38:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:40.531 21:38:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.531 21:38:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.792 21:38:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.792 21:38:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:40.792 { 00:05:40.792 "name": "Malloc0", 00:05:40.792 "aliases": [ 00:05:40.792 "5b949a5f-6285-4c45-8e05-2df3d8252ee4" 00:05:40.792 ], 00:05:40.792 "product_name": "Malloc disk", 00:05:40.792 "block_size": 512, 00:05:40.792 "num_blocks": 16384, 00:05:40.792 "uuid": "5b949a5f-6285-4c45-8e05-2df3d8252ee4", 00:05:40.792 "assigned_rate_limits": { 00:05:40.792 "rw_ios_per_sec": 0, 00:05:40.792 "rw_mbytes_per_sec": 0, 00:05:40.792 "r_mbytes_per_sec": 0, 00:05:40.792 "w_mbytes_per_sec": 0 00:05:40.792 }, 00:05:40.792 "claimed": true, 00:05:40.792 "claim_type": "exclusive_write", 00:05:40.792 "zoned": false, 00:05:40.792 "supported_io_types": { 00:05:40.792 "read": true, 00:05:40.792 "write": true, 00:05:40.792 "unmap": true, 00:05:40.792 "flush": true, 00:05:40.792 "reset": true, 00:05:40.792 "nvme_admin": false, 00:05:40.792 "nvme_io": false, 00:05:40.792 "nvme_io_md": false, 00:05:40.792 "write_zeroes": true, 00:05:40.792 "zcopy": true, 00:05:40.792 "get_zone_info": false, 00:05:40.792 "zone_management": false, 00:05:40.792 "zone_append": false, 00:05:40.792 "compare": false, 00:05:40.792 "compare_and_write": false, 00:05:40.792 "abort": true, 00:05:40.792 "seek_hole": false, 00:05:40.792 "seek_data": false, 00:05:40.792 "copy": true, 00:05:40.792 "nvme_iov_md": false 00:05:40.792 }, 00:05:40.792 "memory_domains": [ 00:05:40.792 { 00:05:40.792 "dma_device_id": "system", 00:05:40.792 "dma_device_type": 1 00:05:40.792 }, 00:05:40.792 { 00:05:40.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.792 "dma_device_type": 2 00:05:40.792 } 00:05:40.792 ], 00:05:40.792 "driver_specific": {} 00:05:40.792 }, 00:05:40.792 { 00:05:40.792 "name": "Passthru0", 00:05:40.792 "aliases": [ 00:05:40.792 "822b193c-ebbf-5890-a2e3-cd8f268309e0" 00:05:40.792 ], 00:05:40.792 "product_name": "passthru", 00:05:40.792 "block_size": 512, 00:05:40.792 "num_blocks": 16384, 00:05:40.792 "uuid": "822b193c-ebbf-5890-a2e3-cd8f268309e0", 00:05:40.792 "assigned_rate_limits": { 00:05:40.792 "rw_ios_per_sec": 0, 00:05:40.792 "rw_mbytes_per_sec": 0, 00:05:40.792 "r_mbytes_per_sec": 0, 00:05:40.792 "w_mbytes_per_sec": 0 00:05:40.792 }, 00:05:40.792 "claimed": false, 00:05:40.792 "zoned": false, 00:05:40.792 "supported_io_types": { 00:05:40.792 "read": true, 00:05:40.792 "write": true, 00:05:40.792 "unmap": true, 00:05:40.792 "flush": true, 00:05:40.792 "reset": true, 00:05:40.792 "nvme_admin": false, 00:05:40.792 "nvme_io": false, 00:05:40.792 "nvme_io_md": false, 00:05:40.792 "write_zeroes": true, 00:05:40.792 "zcopy": true, 00:05:40.792 "get_zone_info": false, 00:05:40.792 "zone_management": false, 00:05:40.792 "zone_append": false, 00:05:40.792 "compare": false, 00:05:40.792 "compare_and_write": false, 00:05:40.792 "abort": true, 00:05:40.792 "seek_hole": false, 00:05:40.792 "seek_data": false, 00:05:40.792 "copy": true, 00:05:40.792 "nvme_iov_md": false 00:05:40.792 }, 00:05:40.792 "memory_domains": [ 00:05:40.792 { 00:05:40.792 "dma_device_id": "system", 00:05:40.792 "dma_device_type": 1 00:05:40.792 }, 
00:05:40.792 { 00:05:40.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.792 "dma_device_type": 2 00:05:40.792 } 00:05:40.792 ], 00:05:40.792 "driver_specific": { 00:05:40.792 "passthru": { 00:05:40.792 "name": "Passthru0", 00:05:40.792 "base_bdev_name": "Malloc0" 00:05:40.792 } 00:05:40.792 } 00:05:40.792 } 00:05:40.792 ]' 00:05:40.792 21:38:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:40.792 21:38:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:40.792 21:38:03 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:40.792 21:38:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.792 21:38:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.792 21:38:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.792 21:38:03 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:40.792 21:38:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.792 21:38:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.792 21:38:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.792 21:38:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:40.792 21:38:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.792 21:38:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.792 21:38:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.792 21:38:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:40.792 21:38:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:40.792 21:38:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:40.792 00:05:40.792 real 0m0.231s 00:05:40.792 user 0m0.127s 00:05:40.792 sys 0m0.034s 00:05:40.792 ************************************ 00:05:40.792 END TEST rpc_integrity 00:05:40.792 21:38:03 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.792 21:38:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:40.792 ************************************ 00:05:40.793 21:38:03 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:40.793 21:38:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.793 21:38:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.793 21:38:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.793 ************************************ 00:05:40.793 START TEST rpc_plugins 00:05:40.793 ************************************ 00:05:40.793 21:38:03 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:40.793 21:38:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:40.793 21:38:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.793 21:38:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:40.793 21:38:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.793 21:38:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:40.793 21:38:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:40.793 21:38:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.793 21:38:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:40.793 21:38:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.793 21:38:03 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:40.793 { 00:05:40.793 "name": "Malloc1", 00:05:40.793 "aliases": [ 00:05:40.793 "bc2ed228-3bdd-4333-adb9-b086be686356" 00:05:40.793 ], 00:05:40.793 "product_name": "Malloc disk", 00:05:40.793 "block_size": 4096, 00:05:40.793 "num_blocks": 256, 00:05:40.793 "uuid": "bc2ed228-3bdd-4333-adb9-b086be686356", 00:05:40.793 "assigned_rate_limits": { 00:05:40.793 "rw_ios_per_sec": 0, 00:05:40.793 "rw_mbytes_per_sec": 0, 00:05:40.793 "r_mbytes_per_sec": 0, 00:05:40.793 "w_mbytes_per_sec": 0 00:05:40.793 }, 00:05:40.793 "claimed": false, 00:05:40.793 "zoned": false, 00:05:40.793 "supported_io_types": { 00:05:40.793 "read": true, 00:05:40.793 "write": true, 00:05:40.793 "unmap": true, 00:05:40.793 "flush": true, 00:05:40.793 "reset": true, 00:05:40.793 "nvme_admin": false, 00:05:40.793 "nvme_io": false, 00:05:40.793 "nvme_io_md": false, 00:05:40.793 "write_zeroes": true, 00:05:40.793 "zcopy": true, 00:05:40.793 "get_zone_info": false, 00:05:40.793 "zone_management": false, 00:05:40.793 "zone_append": false, 00:05:40.793 "compare": false, 00:05:40.793 "compare_and_write": false, 00:05:40.793 "abort": true, 00:05:40.793 "seek_hole": false, 00:05:40.793 "seek_data": false, 00:05:40.793 "copy": true, 00:05:40.793 "nvme_iov_md": false 00:05:40.793 }, 00:05:40.793 "memory_domains": [ 00:05:40.793 { 00:05:40.793 "dma_device_id": "system", 00:05:40.793 "dma_device_type": 1 00:05:40.793 }, 00:05:40.793 { 00:05:40.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.793 "dma_device_type": 2 00:05:40.793 } 00:05:40.793 ], 00:05:40.793 "driver_specific": {} 00:05:40.793 } 00:05:40.793 ]' 00:05:40.793 21:38:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:40.793 21:38:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:40.793 21:38:03 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:40.793 21:38:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.793 21:38:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:40.793 21:38:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.793 21:38:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:40.793 21:38:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.793 21:38:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:40.793 21:38:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.793 21:38:03 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:40.793 21:38:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:41.053 21:38:03 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:41.053 00:05:41.053 real 0m0.119s 00:05:41.053 user 0m0.062s 00:05:41.053 sys 0m0.020s 00:05:41.053 21:38:03 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.053 ************************************ 00:05:41.053 END TEST rpc_plugins 00:05:41.053 ************************************ 00:05:41.053 21:38:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:41.053 21:38:03 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:41.053 21:38:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.053 21:38:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.053 21:38:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.053 ************************************ 00:05:41.053 START TEST rpc_trace_cmd_test 
00:05:41.053 ************************************ 00:05:41.053 21:38:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:41.053 21:38:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:41.053 21:38:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:41.053 21:38:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.053 21:38:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:41.053 21:38:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.053 21:38:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:41.053 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69053", 00:05:41.053 "tpoint_group_mask": "0x8", 00:05:41.053 "iscsi_conn": { 00:05:41.053 "mask": "0x2", 00:05:41.053 "tpoint_mask": "0x0" 00:05:41.053 }, 00:05:41.053 "scsi": { 00:05:41.053 "mask": "0x4", 00:05:41.053 "tpoint_mask": "0x0" 00:05:41.053 }, 00:05:41.053 "bdev": { 00:05:41.053 "mask": "0x8", 00:05:41.053 "tpoint_mask": "0xffffffffffffffff" 00:05:41.053 }, 00:05:41.053 "nvmf_rdma": { 00:05:41.053 "mask": "0x10", 00:05:41.053 "tpoint_mask": "0x0" 00:05:41.053 }, 00:05:41.053 "nvmf_tcp": { 00:05:41.053 "mask": "0x20", 00:05:41.053 "tpoint_mask": "0x0" 00:05:41.053 }, 00:05:41.053 "ftl": { 00:05:41.053 "mask": "0x40", 00:05:41.053 "tpoint_mask": "0x0" 00:05:41.053 }, 00:05:41.053 "blobfs": { 00:05:41.053 "mask": "0x80", 00:05:41.053 "tpoint_mask": "0x0" 00:05:41.053 }, 00:05:41.053 "dsa": { 00:05:41.053 "mask": "0x200", 00:05:41.053 "tpoint_mask": "0x0" 00:05:41.053 }, 00:05:41.053 "thread": { 00:05:41.053 "mask": "0x400", 00:05:41.053 "tpoint_mask": "0x0" 00:05:41.053 }, 00:05:41.053 "nvme_pcie": { 00:05:41.053 "mask": "0x800", 00:05:41.053 "tpoint_mask": "0x0" 00:05:41.053 }, 00:05:41.053 "iaa": { 00:05:41.053 "mask": "0x1000", 00:05:41.053 "tpoint_mask": "0x0" 00:05:41.053 }, 00:05:41.053 "nvme_tcp": { 00:05:41.053 "mask": "0x2000", 00:05:41.053 "tpoint_mask": "0x0" 00:05:41.053 }, 00:05:41.054 "bdev_nvme": { 00:05:41.054 "mask": "0x4000", 00:05:41.054 "tpoint_mask": "0x0" 00:05:41.054 }, 00:05:41.054 "sock": { 00:05:41.054 "mask": "0x8000", 00:05:41.054 "tpoint_mask": "0x0" 00:05:41.054 }, 00:05:41.054 "blob": { 00:05:41.054 "mask": "0x10000", 00:05:41.054 "tpoint_mask": "0x0" 00:05:41.054 }, 00:05:41.054 "bdev_raid": { 00:05:41.054 "mask": "0x20000", 00:05:41.054 "tpoint_mask": "0x0" 00:05:41.054 }, 00:05:41.054 "scheduler": { 00:05:41.054 "mask": "0x40000", 00:05:41.054 "tpoint_mask": "0x0" 00:05:41.054 } 00:05:41.054 }' 00:05:41.054 21:38:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:41.054 21:38:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:41.054 21:38:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:41.054 21:38:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:41.054 21:38:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:41.054 21:38:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:41.054 21:38:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:41.054 21:38:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:41.054 21:38:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:41.054 21:38:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:41.054 00:05:41.054 real 0m0.177s 00:05:41.054 
user 0m0.141s 00:05:41.054 sys 0m0.024s 00:05:41.054 21:38:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.054 ************************************ 00:05:41.054 END TEST rpc_trace_cmd_test 00:05:41.054 ************************************ 00:05:41.054 21:38:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:41.314 21:38:04 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:41.314 21:38:04 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:41.314 21:38:04 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:41.314 21:38:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.314 21:38:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.314 21:38:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.314 ************************************ 00:05:41.314 START TEST rpc_daemon_integrity 00:05:41.314 ************************************ 00:05:41.314 21:38:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:41.314 21:38:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:41.314 21:38:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:41.315 { 00:05:41.315 "name": "Malloc2", 00:05:41.315 "aliases": [ 00:05:41.315 "6e86c351-346b-4eb9-bb03-a912278cea84" 00:05:41.315 ], 00:05:41.315 "product_name": "Malloc disk", 00:05:41.315 "block_size": 512, 00:05:41.315 "num_blocks": 16384, 00:05:41.315 "uuid": "6e86c351-346b-4eb9-bb03-a912278cea84", 00:05:41.315 "assigned_rate_limits": { 00:05:41.315 "rw_ios_per_sec": 0, 00:05:41.315 "rw_mbytes_per_sec": 0, 00:05:41.315 "r_mbytes_per_sec": 0, 00:05:41.315 "w_mbytes_per_sec": 0 00:05:41.315 }, 00:05:41.315 "claimed": false, 00:05:41.315 "zoned": false, 00:05:41.315 "supported_io_types": { 00:05:41.315 "read": true, 00:05:41.315 "write": true, 00:05:41.315 "unmap": true, 00:05:41.315 "flush": true, 00:05:41.315 "reset": true, 00:05:41.315 "nvme_admin": false, 00:05:41.315 "nvme_io": false, 00:05:41.315 "nvme_io_md": false, 00:05:41.315 "write_zeroes": true, 00:05:41.315 "zcopy": true, 00:05:41.315 "get_zone_info": 
false, 00:05:41.315 "zone_management": false, 00:05:41.315 "zone_append": false, 00:05:41.315 "compare": false, 00:05:41.315 "compare_and_write": false, 00:05:41.315 "abort": true, 00:05:41.315 "seek_hole": false, 00:05:41.315 "seek_data": false, 00:05:41.315 "copy": true, 00:05:41.315 "nvme_iov_md": false 00:05:41.315 }, 00:05:41.315 "memory_domains": [ 00:05:41.315 { 00:05:41.315 "dma_device_id": "system", 00:05:41.315 "dma_device_type": 1 00:05:41.315 }, 00:05:41.315 { 00:05:41.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.315 "dma_device_type": 2 00:05:41.315 } 00:05:41.315 ], 00:05:41.315 "driver_specific": {} 00:05:41.315 } 00:05:41.315 ]' 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.315 [2024-11-27 21:38:04.342916] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:41.315 [2024-11-27 21:38:04.342965] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:41.315 [2024-11-27 21:38:04.342984] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:41.315 [2024-11-27 21:38:04.342992] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:41.315 [2024-11-27 21:38:04.345123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:41.315 [2024-11-27 21:38:04.345158] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:41.315 Passthru0 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:41.315 { 00:05:41.315 "name": "Malloc2", 00:05:41.315 "aliases": [ 00:05:41.315 "6e86c351-346b-4eb9-bb03-a912278cea84" 00:05:41.315 ], 00:05:41.315 "product_name": "Malloc disk", 00:05:41.315 "block_size": 512, 00:05:41.315 "num_blocks": 16384, 00:05:41.315 "uuid": "6e86c351-346b-4eb9-bb03-a912278cea84", 00:05:41.315 "assigned_rate_limits": { 00:05:41.315 "rw_ios_per_sec": 0, 00:05:41.315 "rw_mbytes_per_sec": 0, 00:05:41.315 "r_mbytes_per_sec": 0, 00:05:41.315 "w_mbytes_per_sec": 0 00:05:41.315 }, 00:05:41.315 "claimed": true, 00:05:41.315 "claim_type": "exclusive_write", 00:05:41.315 "zoned": false, 00:05:41.315 "supported_io_types": { 00:05:41.315 "read": true, 00:05:41.315 "write": true, 00:05:41.315 "unmap": true, 00:05:41.315 "flush": true, 00:05:41.315 "reset": true, 00:05:41.315 "nvme_admin": false, 00:05:41.315 "nvme_io": false, 00:05:41.315 "nvme_io_md": false, 00:05:41.315 "write_zeroes": true, 00:05:41.315 "zcopy": true, 00:05:41.315 "get_zone_info": false, 00:05:41.315 "zone_management": false, 00:05:41.315 "zone_append": false, 00:05:41.315 "compare": false, 
00:05:41.315 "compare_and_write": false, 00:05:41.315 "abort": true, 00:05:41.315 "seek_hole": false, 00:05:41.315 "seek_data": false, 00:05:41.315 "copy": true, 00:05:41.315 "nvme_iov_md": false 00:05:41.315 }, 00:05:41.315 "memory_domains": [ 00:05:41.315 { 00:05:41.315 "dma_device_id": "system", 00:05:41.315 "dma_device_type": 1 00:05:41.315 }, 00:05:41.315 { 00:05:41.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.315 "dma_device_type": 2 00:05:41.315 } 00:05:41.315 ], 00:05:41.315 "driver_specific": {} 00:05:41.315 }, 00:05:41.315 { 00:05:41.315 "name": "Passthru0", 00:05:41.315 "aliases": [ 00:05:41.315 "e9323def-bd5b-5a36-80b5-f20ee34c83f5" 00:05:41.315 ], 00:05:41.315 "product_name": "passthru", 00:05:41.315 "block_size": 512, 00:05:41.315 "num_blocks": 16384, 00:05:41.315 "uuid": "e9323def-bd5b-5a36-80b5-f20ee34c83f5", 00:05:41.315 "assigned_rate_limits": { 00:05:41.315 "rw_ios_per_sec": 0, 00:05:41.315 "rw_mbytes_per_sec": 0, 00:05:41.315 "r_mbytes_per_sec": 0, 00:05:41.315 "w_mbytes_per_sec": 0 00:05:41.315 }, 00:05:41.315 "claimed": false, 00:05:41.315 "zoned": false, 00:05:41.315 "supported_io_types": { 00:05:41.315 "read": true, 00:05:41.315 "write": true, 00:05:41.315 "unmap": true, 00:05:41.315 "flush": true, 00:05:41.315 "reset": true, 00:05:41.315 "nvme_admin": false, 00:05:41.315 "nvme_io": false, 00:05:41.315 "nvme_io_md": false, 00:05:41.315 "write_zeroes": true, 00:05:41.315 "zcopy": true, 00:05:41.315 "get_zone_info": false, 00:05:41.315 "zone_management": false, 00:05:41.315 "zone_append": false, 00:05:41.315 "compare": false, 00:05:41.315 "compare_and_write": false, 00:05:41.315 "abort": true, 00:05:41.315 "seek_hole": false, 00:05:41.315 "seek_data": false, 00:05:41.315 "copy": true, 00:05:41.315 "nvme_iov_md": false 00:05:41.315 }, 00:05:41.315 "memory_domains": [ 00:05:41.315 { 00:05:41.315 "dma_device_id": "system", 00:05:41.315 "dma_device_type": 1 00:05:41.315 }, 00:05:41.315 { 00:05:41.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.315 "dma_device_type": 2 00:05:41.315 } 00:05:41.315 ], 00:05:41.315 "driver_specific": { 00:05:41.315 "passthru": { 00:05:41.315 "name": "Passthru0", 00:05:41.315 "base_bdev_name": "Malloc2" 00:05:41.315 } 00:05:41.315 } 00:05:41.315 } 00:05:41.315 ]' 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.315 21:38:04 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:41.316 21:38:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:41.576 21:38:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:41.576 21:38:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:41.576 00:05:41.576 real 0m0.234s 00:05:41.576 user 0m0.130s 00:05:41.576 sys 0m0.034s 00:05:41.576 21:38:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.576 ************************************ 00:05:41.576 END TEST rpc_daemon_integrity 00:05:41.576 ************************************ 00:05:41.576 21:38:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:41.576 21:38:04 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:41.576 21:38:04 rpc -- rpc/rpc.sh@84 -- # killprocess 69053 00:05:41.576 21:38:04 rpc -- common/autotest_common.sh@954 -- # '[' -z 69053 ']' 00:05:41.576 21:38:04 rpc -- common/autotest_common.sh@958 -- # kill -0 69053 00:05:41.576 21:38:04 rpc -- common/autotest_common.sh@959 -- # uname 00:05:41.576 21:38:04 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.576 21:38:04 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69053 00:05:41.576 21:38:04 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.576 21:38:04 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.576 killing process with pid 69053 00:05:41.576 21:38:04 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69053' 00:05:41.576 21:38:04 rpc -- common/autotest_common.sh@973 -- # kill 69053 00:05:41.576 21:38:04 rpc -- common/autotest_common.sh@978 -- # wait 69053 00:05:41.836 00:05:41.836 real 0m2.324s 00:05:41.836 user 0m2.770s 00:05:41.836 sys 0m0.573s 00:05:41.836 21:38:04 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.836 ************************************ 00:05:41.836 21:38:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.836 END TEST rpc 00:05:41.836 ************************************ 00:05:41.836 21:38:04 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:41.836 21:38:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.836 21:38:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.836 21:38:04 -- common/autotest_common.sh@10 -- # set +x 00:05:41.836 ************************************ 00:05:41.836 START TEST skip_rpc 00:05:41.836 ************************************ 00:05:41.836 21:38:04 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:41.836 * Looking for test storage... 
00:05:41.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:41.836 21:38:04 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.836 21:38:04 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.836 21:38:04 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:42.098 21:38:04 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:42.098 21:38:04 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.098 21:38:04 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.098 21:38:04 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.098 21:38:04 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.098 21:38:04 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.098 21:38:04 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.098 21:38:04 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.098 21:38:04 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.098 21:38:04 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.098 21:38:04 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.098 21:38:04 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.098 21:38:04 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:42.098 21:38:04 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:42.098 21:38:04 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.098 21:38:04 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:42.098 21:38:04 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:42.098 21:38:04 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:42.098 21:38:04 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.098 21:38:04 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:42.098 21:38:04 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.098 21:38:04 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:42.098 21:38:04 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:42.098 21:38:04 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.098 21:38:04 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:42.098 21:38:04 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.098 21:38:04 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.098 21:38:04 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.098 21:38:04 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:42.098 21:38:04 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.098 21:38:04 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:42.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.098 --rc genhtml_branch_coverage=1 00:05:42.098 --rc genhtml_function_coverage=1 00:05:42.098 --rc genhtml_legend=1 00:05:42.098 --rc geninfo_all_blocks=1 00:05:42.098 --rc geninfo_unexecuted_blocks=1 00:05:42.098 00:05:42.098 ' 00:05:42.098 21:38:04 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:42.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.098 --rc genhtml_branch_coverage=1 00:05:42.098 --rc genhtml_function_coverage=1 00:05:42.098 --rc genhtml_legend=1 00:05:42.098 --rc geninfo_all_blocks=1 00:05:42.098 --rc geninfo_unexecuted_blocks=1 00:05:42.098 00:05:42.098 ' 00:05:42.098 21:38:04 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:42.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.098 --rc genhtml_branch_coverage=1 00:05:42.098 --rc genhtml_function_coverage=1 00:05:42.098 --rc genhtml_legend=1 00:05:42.098 --rc geninfo_all_blocks=1 00:05:42.098 --rc geninfo_unexecuted_blocks=1 00:05:42.098 00:05:42.098 ' 00:05:42.098 21:38:04 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:42.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.098 --rc genhtml_branch_coverage=1 00:05:42.098 --rc genhtml_function_coverage=1 00:05:42.098 --rc genhtml_legend=1 00:05:42.098 --rc geninfo_all_blocks=1 00:05:42.098 --rc geninfo_unexecuted_blocks=1 00:05:42.098 00:05:42.098 ' 00:05:42.098 21:38:04 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:42.098 21:38:04 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:42.098 21:38:04 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:42.098 21:38:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.098 21:38:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.098 21:38:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.098 ************************************ 00:05:42.098 START TEST skip_rpc 00:05:42.098 ************************************ 00:05:42.098 21:38:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:42.098 21:38:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69249 00:05:42.098 21:38:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:42.098 21:38:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:42.098 21:38:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:42.098 [2024-11-27 21:38:05.075144] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:42.098 [2024-11-27 21:38:05.075390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69249 ] 00:05:42.359 [2024-11-27 21:38:05.222781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.359 [2024-11-27 21:38:05.241514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.638 21:38:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:47.638 21:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:47.638 21:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:47.638 21:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:47.638 21:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.638 21:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:47.638 21:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.638 21:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:47.638 21:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.638 21:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.639 21:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:47.639 21:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:47.639 21:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:47.639 21:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:47.639 21:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:47.639 21:38:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:47.639 21:38:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69249 00:05:47.639 21:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 69249 ']' 00:05:47.639 21:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 69249 00:05:47.639 21:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:47.639 21:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.639 21:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69249 00:05:47.639 killing process with pid 69249 00:05:47.639 21:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.639 21:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.639 21:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69249' 00:05:47.639 21:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 69249 00:05:47.639 21:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 69249 00:05:47.639 ************************************ 00:05:47.639 END TEST skip_rpc 00:05:47.639 ************************************ 00:05:47.639 00:05:47.639 real 0m5.247s 00:05:47.639 user 0m4.939s 00:05:47.639 sys 0m0.215s 00:05:47.639 21:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.639 21:38:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:47.639 21:38:10 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:47.639 21:38:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.639 21:38:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.639 21:38:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.639 ************************************ 00:05:47.639 START TEST skip_rpc_with_json 00:05:47.639 ************************************ 00:05:47.639 21:38:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:47.639 21:38:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:47.639 21:38:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69337 00:05:47.639 21:38:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.639 21:38:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69337 00:05:47.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.639 21:38:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 69337 ']' 00:05:47.639 21:38:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.639 21:38:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.639 21:38:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.639 21:38:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.639 21:38:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:47.639 21:38:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.639 [2024-11-27 21:38:10.367643] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:47.639 [2024-11-27 21:38:10.367758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69337 ] 00:05:47.639 [2024-11-27 21:38:10.509962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.639 [2024-11-27 21:38:10.526189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.206 21:38:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.206 21:38:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:48.206 21:38:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:48.206 21:38:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.206 21:38:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.206 [2024-11-27 21:38:11.197253] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:48.206 request: 00:05:48.206 { 00:05:48.206 "trtype": "tcp", 00:05:48.206 "method": "nvmf_get_transports", 00:05:48.206 "req_id": 1 00:05:48.206 } 00:05:48.206 Got JSON-RPC error response 00:05:48.206 response: 00:05:48.206 { 00:05:48.206 "code": -19, 00:05:48.206 "message": "No such device" 00:05:48.206 } 00:05:48.206 21:38:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:48.206 21:38:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:48.206 21:38:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.206 21:38:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.206 [2024-11-27 21:38:11.209359] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:48.206 21:38:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.206 21:38:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:48.206 21:38:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.206 21:38:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.465 21:38:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.465 21:38:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:48.465 { 00:05:48.465 "subsystems": [ 00:05:48.465 { 00:05:48.465 "subsystem": "fsdev", 00:05:48.465 "config": [ 00:05:48.465 { 00:05:48.465 "method": "fsdev_set_opts", 00:05:48.465 "params": { 00:05:48.465 "fsdev_io_pool_size": 65535, 00:05:48.465 "fsdev_io_cache_size": 256 00:05:48.465 } 00:05:48.465 } 00:05:48.465 ] 00:05:48.465 }, 00:05:48.465 { 00:05:48.465 "subsystem": "keyring", 00:05:48.465 "config": [] 00:05:48.465 }, 00:05:48.465 { 00:05:48.465 "subsystem": "iobuf", 00:05:48.465 "config": [ 00:05:48.465 { 00:05:48.465 "method": "iobuf_set_options", 00:05:48.465 "params": { 00:05:48.465 "small_pool_count": 8192, 00:05:48.465 "large_pool_count": 1024, 00:05:48.465 "small_bufsize": 8192, 00:05:48.465 "large_bufsize": 135168, 00:05:48.465 "enable_numa": false 00:05:48.465 } 00:05:48.465 } 00:05:48.465 ] 00:05:48.465 }, 00:05:48.465 { 00:05:48.465 "subsystem": "sock", 00:05:48.465 "config": [ 00:05:48.465 { 
00:05:48.465 "method": "sock_set_default_impl", 00:05:48.465 "params": { 00:05:48.465 "impl_name": "posix" 00:05:48.465 } 00:05:48.465 }, 00:05:48.465 { 00:05:48.465 "method": "sock_impl_set_options", 00:05:48.466 "params": { 00:05:48.466 "impl_name": "ssl", 00:05:48.466 "recv_buf_size": 4096, 00:05:48.466 "send_buf_size": 4096, 00:05:48.466 "enable_recv_pipe": true, 00:05:48.466 "enable_quickack": false, 00:05:48.466 "enable_placement_id": 0, 00:05:48.466 "enable_zerocopy_send_server": true, 00:05:48.466 "enable_zerocopy_send_client": false, 00:05:48.466 "zerocopy_threshold": 0, 00:05:48.466 "tls_version": 0, 00:05:48.466 "enable_ktls": false 00:05:48.466 } 00:05:48.466 }, 00:05:48.466 { 00:05:48.466 "method": "sock_impl_set_options", 00:05:48.466 "params": { 00:05:48.466 "impl_name": "posix", 00:05:48.466 "recv_buf_size": 2097152, 00:05:48.466 "send_buf_size": 2097152, 00:05:48.466 "enable_recv_pipe": true, 00:05:48.466 "enable_quickack": false, 00:05:48.466 "enable_placement_id": 0, 00:05:48.466 "enable_zerocopy_send_server": true, 00:05:48.466 "enable_zerocopy_send_client": false, 00:05:48.466 "zerocopy_threshold": 0, 00:05:48.466 "tls_version": 0, 00:05:48.466 "enable_ktls": false 00:05:48.466 } 00:05:48.466 } 00:05:48.466 ] 00:05:48.466 }, 00:05:48.466 { 00:05:48.466 "subsystem": "vmd", 00:05:48.466 "config": [] 00:05:48.466 }, 00:05:48.466 { 00:05:48.466 "subsystem": "accel", 00:05:48.466 "config": [ 00:05:48.466 { 00:05:48.466 "method": "accel_set_options", 00:05:48.466 "params": { 00:05:48.466 "small_cache_size": 128, 00:05:48.466 "large_cache_size": 16, 00:05:48.466 "task_count": 2048, 00:05:48.466 "sequence_count": 2048, 00:05:48.466 "buf_count": 2048 00:05:48.466 } 00:05:48.466 } 00:05:48.466 ] 00:05:48.466 }, 00:05:48.466 { 00:05:48.466 "subsystem": "bdev", 00:05:48.466 "config": [ 00:05:48.466 { 00:05:48.466 "method": "bdev_set_options", 00:05:48.466 "params": { 00:05:48.466 "bdev_io_pool_size": 65535, 00:05:48.466 "bdev_io_cache_size": 256, 00:05:48.466 "bdev_auto_examine": true, 00:05:48.466 "iobuf_small_cache_size": 128, 00:05:48.466 "iobuf_large_cache_size": 16 00:05:48.466 } 00:05:48.466 }, 00:05:48.466 { 00:05:48.466 "method": "bdev_raid_set_options", 00:05:48.466 "params": { 00:05:48.466 "process_window_size_kb": 1024, 00:05:48.466 "process_max_bandwidth_mb_sec": 0 00:05:48.466 } 00:05:48.466 }, 00:05:48.466 { 00:05:48.466 "method": "bdev_iscsi_set_options", 00:05:48.466 "params": { 00:05:48.466 "timeout_sec": 30 00:05:48.466 } 00:05:48.466 }, 00:05:48.466 { 00:05:48.466 "method": "bdev_nvme_set_options", 00:05:48.466 "params": { 00:05:48.466 "action_on_timeout": "none", 00:05:48.466 "timeout_us": 0, 00:05:48.466 "timeout_admin_us": 0, 00:05:48.466 "keep_alive_timeout_ms": 10000, 00:05:48.466 "arbitration_burst": 0, 00:05:48.466 "low_priority_weight": 0, 00:05:48.466 "medium_priority_weight": 0, 00:05:48.466 "high_priority_weight": 0, 00:05:48.466 "nvme_adminq_poll_period_us": 10000, 00:05:48.466 "nvme_ioq_poll_period_us": 0, 00:05:48.466 "io_queue_requests": 0, 00:05:48.466 "delay_cmd_submit": true, 00:05:48.466 "transport_retry_count": 4, 00:05:48.466 "bdev_retry_count": 3, 00:05:48.466 "transport_ack_timeout": 0, 00:05:48.466 "ctrlr_loss_timeout_sec": 0, 00:05:48.466 "reconnect_delay_sec": 0, 00:05:48.466 "fast_io_fail_timeout_sec": 0, 00:05:48.466 "disable_auto_failback": false, 00:05:48.466 "generate_uuids": false, 00:05:48.466 "transport_tos": 0, 00:05:48.466 "nvme_error_stat": false, 00:05:48.466 "rdma_srq_size": 0, 00:05:48.466 "io_path_stat": false, 
00:05:48.466 "allow_accel_sequence": false, 00:05:48.466 "rdma_max_cq_size": 0, 00:05:48.466 "rdma_cm_event_timeout_ms": 0, 00:05:48.466 "dhchap_digests": [ 00:05:48.466 "sha256", 00:05:48.466 "sha384", 00:05:48.466 "sha512" 00:05:48.466 ], 00:05:48.466 "dhchap_dhgroups": [ 00:05:48.466 "null", 00:05:48.466 "ffdhe2048", 00:05:48.466 "ffdhe3072", 00:05:48.466 "ffdhe4096", 00:05:48.466 "ffdhe6144", 00:05:48.466 "ffdhe8192" 00:05:48.466 ] 00:05:48.466 } 00:05:48.466 }, 00:05:48.466 { 00:05:48.466 "method": "bdev_nvme_set_hotplug", 00:05:48.466 "params": { 00:05:48.466 "period_us": 100000, 00:05:48.466 "enable": false 00:05:48.466 } 00:05:48.466 }, 00:05:48.466 { 00:05:48.466 "method": "bdev_wait_for_examine" 00:05:48.466 } 00:05:48.466 ] 00:05:48.466 }, 00:05:48.466 { 00:05:48.466 "subsystem": "scsi", 00:05:48.466 "config": null 00:05:48.466 }, 00:05:48.466 { 00:05:48.466 "subsystem": "scheduler", 00:05:48.466 "config": [ 00:05:48.466 { 00:05:48.466 "method": "framework_set_scheduler", 00:05:48.466 "params": { 00:05:48.466 "name": "static" 00:05:48.466 } 00:05:48.466 } 00:05:48.466 ] 00:05:48.466 }, 00:05:48.466 { 00:05:48.466 "subsystem": "vhost_scsi", 00:05:48.466 "config": [] 00:05:48.466 }, 00:05:48.466 { 00:05:48.466 "subsystem": "vhost_blk", 00:05:48.466 "config": [] 00:05:48.466 }, 00:05:48.466 { 00:05:48.466 "subsystem": "ublk", 00:05:48.466 "config": [] 00:05:48.466 }, 00:05:48.466 { 00:05:48.466 "subsystem": "nbd", 00:05:48.466 "config": [] 00:05:48.466 }, 00:05:48.466 { 00:05:48.466 "subsystem": "nvmf", 00:05:48.466 "config": [ 00:05:48.466 { 00:05:48.466 "method": "nvmf_set_config", 00:05:48.466 "params": { 00:05:48.466 "discovery_filter": "match_any", 00:05:48.466 "admin_cmd_passthru": { 00:05:48.466 "identify_ctrlr": false 00:05:48.466 }, 00:05:48.466 "dhchap_digests": [ 00:05:48.466 "sha256", 00:05:48.466 "sha384", 00:05:48.466 "sha512" 00:05:48.466 ], 00:05:48.466 "dhchap_dhgroups": [ 00:05:48.466 "null", 00:05:48.466 "ffdhe2048", 00:05:48.466 "ffdhe3072", 00:05:48.466 "ffdhe4096", 00:05:48.466 "ffdhe6144", 00:05:48.466 "ffdhe8192" 00:05:48.466 ] 00:05:48.466 } 00:05:48.466 }, 00:05:48.466 { 00:05:48.466 "method": "nvmf_set_max_subsystems", 00:05:48.466 "params": { 00:05:48.466 "max_subsystems": 1024 00:05:48.466 } 00:05:48.466 }, 00:05:48.466 { 00:05:48.466 "method": "nvmf_set_crdt", 00:05:48.466 "params": { 00:05:48.466 "crdt1": 0, 00:05:48.466 "crdt2": 0, 00:05:48.466 "crdt3": 0 00:05:48.466 } 00:05:48.466 }, 00:05:48.466 { 00:05:48.466 "method": "nvmf_create_transport", 00:05:48.466 "params": { 00:05:48.466 "trtype": "TCP", 00:05:48.466 "max_queue_depth": 128, 00:05:48.467 "max_io_qpairs_per_ctrlr": 127, 00:05:48.467 "in_capsule_data_size": 4096, 00:05:48.467 "max_io_size": 131072, 00:05:48.467 "io_unit_size": 131072, 00:05:48.467 "max_aq_depth": 128, 00:05:48.467 "num_shared_buffers": 511, 00:05:48.467 "buf_cache_size": 4294967295, 00:05:48.467 "dif_insert_or_strip": false, 00:05:48.467 "zcopy": false, 00:05:48.467 "c2h_success": true, 00:05:48.467 "sock_priority": 0, 00:05:48.467 "abort_timeout_sec": 1, 00:05:48.467 "ack_timeout": 0, 00:05:48.467 "data_wr_pool_size": 0 00:05:48.467 } 00:05:48.467 } 00:05:48.467 ] 00:05:48.467 }, 00:05:48.467 { 00:05:48.467 "subsystem": "iscsi", 00:05:48.467 "config": [ 00:05:48.467 { 00:05:48.467 "method": "iscsi_set_options", 00:05:48.467 "params": { 00:05:48.467 "node_base": "iqn.2016-06.io.spdk", 00:05:48.467 "max_sessions": 128, 00:05:48.467 "max_connections_per_session": 2, 00:05:48.467 "max_queue_depth": 64, 00:05:48.467 
"default_time2wait": 2, 00:05:48.467 "default_time2retain": 20, 00:05:48.467 "first_burst_length": 8192, 00:05:48.467 "immediate_data": true, 00:05:48.467 "allow_duplicated_isid": false, 00:05:48.467 "error_recovery_level": 0, 00:05:48.467 "nop_timeout": 60, 00:05:48.467 "nop_in_interval": 30, 00:05:48.467 "disable_chap": false, 00:05:48.467 "require_chap": false, 00:05:48.467 "mutual_chap": false, 00:05:48.467 "chap_group": 0, 00:05:48.467 "max_large_datain_per_connection": 64, 00:05:48.467 "max_r2t_per_connection": 4, 00:05:48.467 "pdu_pool_size": 36864, 00:05:48.467 "immediate_data_pool_size": 16384, 00:05:48.467 "data_out_pool_size": 2048 00:05:48.467 } 00:05:48.467 } 00:05:48.467 ] 00:05:48.467 } 00:05:48.467 ] 00:05:48.467 } 00:05:48.467 21:38:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:48.467 21:38:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69337 00:05:48.467 21:38:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69337 ']' 00:05:48.467 21:38:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69337 00:05:48.467 21:38:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:48.467 21:38:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.467 21:38:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69337 00:05:48.467 killing process with pid 69337 00:05:48.467 21:38:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.467 21:38:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.467 21:38:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69337' 00:05:48.467 21:38:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 69337 00:05:48.467 21:38:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69337 00:05:48.726 21:38:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69360 00:05:48.726 21:38:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:48.726 21:38:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:53.990 21:38:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69360 00:05:53.990 21:38:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69360 ']' 00:05:53.990 21:38:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69360 00:05:53.990 21:38:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:53.990 21:38:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.990 21:38:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69360 00:05:53.990 killing process with pid 69360 00:05:53.990 21:38:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.990 21:38:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.990 21:38:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69360' 00:05:53.990 21:38:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 69360 00:05:53.990 21:38:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69360 00:05:53.990 21:38:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:53.990 21:38:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:53.990 ************************************ 00:05:53.990 END TEST skip_rpc_with_json 00:05:53.990 ************************************ 00:05:53.990 00:05:53.990 real 0m6.566s 00:05:53.990 user 0m6.287s 00:05:53.990 sys 0m0.502s 00:05:53.990 21:38:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.990 21:38:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.990 21:38:16 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:53.990 21:38:16 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.990 21:38:16 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.990 21:38:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.990 ************************************ 00:05:53.990 START TEST skip_rpc_with_delay 00:05:53.990 ************************************ 00:05:53.990 21:38:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:53.990 21:38:16 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:53.990 21:38:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:53.990 21:38:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:53.990 21:38:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:53.990 21:38:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.990 21:38:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:53.990 21:38:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.990 21:38:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:53.990 21:38:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.990 21:38:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:53.990 21:38:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:53.990 21:38:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:53.990 [2024-11-27 21:38:16.991560] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
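(Illustrative sketch, not part of the captured run: the skip_rpc_with_delay case above only needs to show that spdk_tgt refuses to start when --no-rpc-server and --wait-for-rpc are combined. Using the binary path seen in this log, the same check could be reproduced by hand roughly as follows; the real test drives the call through the NOT/valid_exec_arg helpers shown above, so treat this as a hand-run approximation rather than the test's actual code.)
# hypothetical manual re-run of the incompatible-flag check
if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "unexpected: spdk_tgt accepted --no-rpc-server together with --wait-for-rpc" >&2
    exit 1
else
    # expected path: matches the app.c error logged above
    echo "rejected as expected: --wait-for-rpc requires an RPC server"
fi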
00:05:53.990 ************************************ 00:05:53.990 END TEST skip_rpc_with_delay 00:05:53.990 ************************************ 00:05:53.991 21:38:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:53.991 21:38:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:53.991 21:38:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:53.991 21:38:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:53.991 00:05:53.991 real 0m0.117s 00:05:53.991 user 0m0.062s 00:05:53.991 sys 0m0.053s 00:05:53.991 21:38:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.991 21:38:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:53.991 21:38:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:53.991 21:38:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:53.991 21:38:17 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:53.991 21:38:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.991 21:38:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.991 21:38:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.991 ************************************ 00:05:53.991 START TEST exit_on_failed_rpc_init 00:05:53.991 ************************************ 00:05:53.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.991 21:38:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:53.991 21:38:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69471 00:05:53.991 21:38:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69471 00:05:53.991 21:38:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.991 21:38:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 69471 ']' 00:05:53.991 21:38:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.991 21:38:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.991 21:38:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.991 21:38:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.991 21:38:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:54.249 [2024-11-27 21:38:17.152927] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:54.249 [2024-11-27 21:38:17.153047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69471 ] 00:05:54.249 [2024-11-27 21:38:17.287304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.249 [2024-11-27 21:38:17.303452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.184 21:38:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.184 21:38:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:55.184 21:38:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:55.184 21:38:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:55.184 21:38:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:55.184 21:38:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:55.184 21:38:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:55.184 21:38:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.184 21:38:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:55.184 21:38:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.184 21:38:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:55.184 21:38:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.184 21:38:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:55.184 21:38:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:55.184 21:38:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:55.184 [2024-11-27 21:38:18.017947] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:55.184 [2024-11-27 21:38:18.018059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69489 ] 00:05:55.184 [2024-11-27 21:38:18.163473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.184 [2024-11-27 21:38:18.181181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.184 [2024-11-27 21:38:18.181272] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:55.184 [2024-11-27 21:38:18.181287] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:55.184 [2024-11-27 21:38:18.181297] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:55.184 21:38:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:55.184 21:38:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:55.184 21:38:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:55.184 21:38:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:55.185 21:38:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:55.185 21:38:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:55.185 21:38:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:55.185 21:38:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69471 00:05:55.185 21:38:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 69471 ']' 00:05:55.185 21:38:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 69471 00:05:55.185 21:38:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:55.185 21:38:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.185 21:38:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69471 00:05:55.185 killing process with pid 69471 00:05:55.185 21:38:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.185 21:38:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.185 21:38:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69471' 00:05:55.185 21:38:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 69471 00:05:55.185 21:38:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 69471 00:05:55.446 ************************************ 00:05:55.446 END TEST exit_on_failed_rpc_init 00:05:55.446 ************************************ 00:05:55.446 00:05:55.446 real 0m1.400s 00:05:55.446 user 0m1.541s 00:05:55.446 sys 0m0.312s 00:05:55.446 21:38:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.446 21:38:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:55.446 21:38:18 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:55.446 ************************************ 00:05:55.446 END TEST skip_rpc 00:05:55.446 ************************************ 00:05:55.446 00:05:55.446 real 0m13.691s 00:05:55.446 user 0m12.969s 00:05:55.446 sys 0m1.248s 00:05:55.446 21:38:18 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.446 21:38:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.707 21:38:18 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:55.707 21:38:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.707 21:38:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.707 21:38:18 -- common/autotest_common.sh@10 -- # set +x 00:05:55.707 
************************************ 00:05:55.707 START TEST rpc_client 00:05:55.707 ************************************ 00:05:55.707 21:38:18 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:55.707 * Looking for test storage... 00:05:55.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:55.707 21:38:18 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:55.707 21:38:18 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:55.707 21:38:18 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:55.707 21:38:18 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:55.707 21:38:18 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.707 21:38:18 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.707 21:38:18 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.708 21:38:18 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.708 21:38:18 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.708 21:38:18 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.708 21:38:18 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.708 21:38:18 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.708 21:38:18 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.708 21:38:18 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.708 21:38:18 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.708 21:38:18 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:55.708 21:38:18 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:55.708 21:38:18 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.708 21:38:18 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.708 21:38:18 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:55.708 21:38:18 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:55.708 21:38:18 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.708 21:38:18 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:55.708 21:38:18 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.708 21:38:18 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:55.708 21:38:18 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:55.708 21:38:18 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.708 21:38:18 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:55.708 21:38:18 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.708 21:38:18 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.708 21:38:18 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.708 21:38:18 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:55.708 21:38:18 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.708 21:38:18 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:55.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.708 --rc genhtml_branch_coverage=1 00:05:55.708 --rc genhtml_function_coverage=1 00:05:55.708 --rc genhtml_legend=1 00:05:55.708 --rc geninfo_all_blocks=1 00:05:55.708 --rc geninfo_unexecuted_blocks=1 00:05:55.708 00:05:55.708 ' 00:05:55.708 21:38:18 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:55.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.708 --rc genhtml_branch_coverage=1 00:05:55.708 --rc genhtml_function_coverage=1 00:05:55.708 --rc genhtml_legend=1 00:05:55.708 --rc geninfo_all_blocks=1 00:05:55.708 --rc geninfo_unexecuted_blocks=1 00:05:55.708 00:05:55.708 ' 00:05:55.708 21:38:18 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:55.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.708 --rc genhtml_branch_coverage=1 00:05:55.708 --rc genhtml_function_coverage=1 00:05:55.708 --rc genhtml_legend=1 00:05:55.708 --rc geninfo_all_blocks=1 00:05:55.708 --rc geninfo_unexecuted_blocks=1 00:05:55.708 00:05:55.708 ' 00:05:55.708 21:38:18 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:55.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.708 --rc genhtml_branch_coverage=1 00:05:55.708 --rc genhtml_function_coverage=1 00:05:55.708 --rc genhtml_legend=1 00:05:55.708 --rc geninfo_all_blocks=1 00:05:55.708 --rc geninfo_unexecuted_blocks=1 00:05:55.708 00:05:55.708 ' 00:05:55.708 21:38:18 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:55.708 OK 00:05:55.708 21:38:18 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:55.708 00:05:55.708 real 0m0.179s 00:05:55.708 user 0m0.102s 00:05:55.708 sys 0m0.083s 00:05:55.708 21:38:18 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.708 21:38:18 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:55.708 ************************************ 00:05:55.708 END TEST rpc_client 00:05:55.708 ************************************ 00:05:55.708 21:38:18 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:55.708 21:38:18 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.708 21:38:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.708 21:38:18 -- common/autotest_common.sh@10 -- # set +x 00:05:55.708 ************************************ 00:05:55.708 START TEST json_config 00:05:55.708 ************************************ 00:05:55.708 21:38:18 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:55.970 21:38:18 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:55.970 21:38:18 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:55.970 21:38:18 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:55.970 21:38:18 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:55.970 21:38:18 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.970 21:38:18 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.970 21:38:18 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.970 21:38:18 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.970 21:38:18 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.970 21:38:18 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.970 21:38:18 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.970 21:38:18 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.970 21:38:18 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.970 21:38:18 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.970 21:38:18 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.970 21:38:18 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:55.970 21:38:18 json_config -- scripts/common.sh@345 -- # : 1 00:05:55.970 21:38:18 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.970 21:38:18 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.970 21:38:18 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:55.970 21:38:18 json_config -- scripts/common.sh@353 -- # local d=1 00:05:55.970 21:38:18 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.970 21:38:18 json_config -- scripts/common.sh@355 -- # echo 1 00:05:55.970 21:38:18 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.970 21:38:18 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:55.970 21:38:18 json_config -- scripts/common.sh@353 -- # local d=2 00:05:55.970 21:38:18 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.970 21:38:18 json_config -- scripts/common.sh@355 -- # echo 2 00:05:55.970 21:38:18 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.970 21:38:18 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.970 21:38:18 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.970 21:38:18 json_config -- scripts/common.sh@368 -- # return 0 00:05:55.970 21:38:18 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.970 21:38:18 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:55.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.970 --rc genhtml_branch_coverage=1 00:05:55.970 --rc genhtml_function_coverage=1 00:05:55.970 --rc genhtml_legend=1 00:05:55.970 --rc geninfo_all_blocks=1 00:05:55.970 --rc geninfo_unexecuted_blocks=1 00:05:55.970 00:05:55.970 ' 00:05:55.970 21:38:18 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:55.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.970 --rc genhtml_branch_coverage=1 00:05:55.970 --rc genhtml_function_coverage=1 00:05:55.970 --rc genhtml_legend=1 00:05:55.970 --rc geninfo_all_blocks=1 00:05:55.970 --rc geninfo_unexecuted_blocks=1 00:05:55.970 00:05:55.970 ' 00:05:55.970 21:38:18 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:55.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.970 --rc genhtml_branch_coverage=1 00:05:55.970 --rc genhtml_function_coverage=1 00:05:55.970 --rc genhtml_legend=1 00:05:55.970 --rc geninfo_all_blocks=1 00:05:55.970 --rc geninfo_unexecuted_blocks=1 00:05:55.970 00:05:55.970 ' 00:05:55.970 21:38:18 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:55.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.970 --rc genhtml_branch_coverage=1 00:05:55.970 --rc genhtml_function_coverage=1 00:05:55.970 --rc genhtml_legend=1 00:05:55.970 --rc geninfo_all_blocks=1 00:05:55.970 --rc geninfo_unexecuted_blocks=1 00:05:55.970 00:05:55.970 ' 00:05:55.970 21:38:18 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:55.970 21:38:18 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:55.970 21:38:18 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:55.970 21:38:18 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:55.970 21:38:18 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:55.970 21:38:18 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:55.970 21:38:18 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:55.970 21:38:18 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:55.970 21:38:18 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:55.970 21:38:18 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:55.970 21:38:18 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:55.970 21:38:18 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:55.970 21:38:18 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dedf263a-1388-4f83-8c1a-6d151fbf491d 00:05:55.970 21:38:18 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=dedf263a-1388-4f83-8c1a-6d151fbf491d 00:05:55.970 21:38:18 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:55.970 21:38:18 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:55.970 21:38:18 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:55.970 21:38:18 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:55.970 21:38:18 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:55.970 21:38:18 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:55.970 21:38:18 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:55.970 21:38:18 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:55.970 21:38:18 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:55.970 21:38:18 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.970 21:38:18 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.970 21:38:18 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.970 21:38:18 json_config -- paths/export.sh@5 -- # export PATH 00:05:55.970 21:38:18 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.970 21:38:18 json_config -- nvmf/common.sh@51 -- # : 0 00:05:55.970 21:38:18 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:55.970 21:38:18 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:55.970 21:38:18 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:55.970 21:38:18 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:55.970 21:38:18 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:55.970 21:38:18 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:55.970 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:55.971 21:38:18 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:55.971 21:38:18 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:55.971 21:38:18 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:55.971 21:38:18 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:55.971 21:38:18 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:55.971 21:38:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:55.971 21:38:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:55.971 21:38:18 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:55.971 WARNING: No tests are enabled so not running JSON configuration tests 00:05:55.971 21:38:18 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:55.971 21:38:18 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:55.971 00:05:55.971 real 0m0.144s 00:05:55.971 user 0m0.087s 00:05:55.971 sys 0m0.056s 00:05:55.971 21:38:18 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.971 21:38:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.971 ************************************ 00:05:55.971 END TEST json_config 00:05:55.971 ************************************ 00:05:55.971 21:38:19 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:55.971 21:38:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.971 21:38:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.971 21:38:19 -- common/autotest_common.sh@10 -- # set +x 00:05:55.971 ************************************ 00:05:55.971 START TEST json_config_extra_key 00:05:55.971 ************************************ 00:05:55.971 21:38:19 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:55.971 21:38:19 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:55.971 21:38:19 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:55.971 21:38:19 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:56.232 21:38:19 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:56.232 21:38:19 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.232 21:38:19 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.232 21:38:19 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.232 21:38:19 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.232 21:38:19 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.232 21:38:19 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.232 21:38:19 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.232 21:38:19 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.232 21:38:19 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.232 21:38:19 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.232 21:38:19 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.232 21:38:19 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:56.232 21:38:19 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:56.232 21:38:19 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.232 21:38:19 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:56.232 21:38:19 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:56.232 21:38:19 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:56.232 21:38:19 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.232 21:38:19 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:56.232 21:38:19 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.232 21:38:19 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:56.232 21:38:19 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:56.232 21:38:19 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.232 21:38:19 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:56.232 21:38:19 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.232 21:38:19 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.232 21:38:19 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.232 21:38:19 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:56.232 21:38:19 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.232 21:38:19 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:56.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.232 --rc genhtml_branch_coverage=1 00:05:56.232 --rc genhtml_function_coverage=1 00:05:56.232 --rc genhtml_legend=1 00:05:56.232 --rc geninfo_all_blocks=1 00:05:56.232 --rc geninfo_unexecuted_blocks=1 00:05:56.232 00:05:56.232 ' 00:05:56.232 21:38:19 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:56.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.232 --rc genhtml_branch_coverage=1 00:05:56.232 --rc genhtml_function_coverage=1 00:05:56.232 --rc genhtml_legend=1 00:05:56.232 --rc geninfo_all_blocks=1 00:05:56.232 --rc geninfo_unexecuted_blocks=1 00:05:56.232 00:05:56.232 ' 00:05:56.232 21:38:19 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:56.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.232 --rc genhtml_branch_coverage=1 00:05:56.232 --rc genhtml_function_coverage=1 00:05:56.232 --rc genhtml_legend=1 00:05:56.232 --rc geninfo_all_blocks=1 00:05:56.232 --rc geninfo_unexecuted_blocks=1 00:05:56.232 00:05:56.232 ' 00:05:56.232 21:38:19 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:56.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.232 --rc genhtml_branch_coverage=1 00:05:56.232 --rc 
genhtml_function_coverage=1 00:05:56.232 --rc genhtml_legend=1 00:05:56.232 --rc geninfo_all_blocks=1 00:05:56.232 --rc geninfo_unexecuted_blocks=1 00:05:56.232 00:05:56.232 ' 00:05:56.232 21:38:19 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:56.232 21:38:19 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:56.232 21:38:19 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:56.232 21:38:19 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:56.232 21:38:19 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:56.232 21:38:19 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:56.232 21:38:19 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:56.232 21:38:19 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:56.232 21:38:19 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:56.232 21:38:19 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:56.232 21:38:19 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:56.232 21:38:19 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:56.232 21:38:19 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dedf263a-1388-4f83-8c1a-6d151fbf491d 00:05:56.233 21:38:19 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=dedf263a-1388-4f83-8c1a-6d151fbf491d 00:05:56.233 21:38:19 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:56.233 21:38:19 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:56.233 21:38:19 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:56.233 21:38:19 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:56.233 21:38:19 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:56.233 21:38:19 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:56.233 21:38:19 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:56.233 21:38:19 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:56.233 21:38:19 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:56.233 21:38:19 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.233 21:38:19 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.233 21:38:19 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.233 21:38:19 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:56.233 21:38:19 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:56.233 21:38:19 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:56.233 21:38:19 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:56.233 21:38:19 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:56.233 21:38:19 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:56.233 21:38:19 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:56.233 21:38:19 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:56.233 21:38:19 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:56.233 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:56.233 21:38:19 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:56.233 21:38:19 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:56.233 21:38:19 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:56.233 21:38:19 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:56.233 21:38:19 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:56.233 21:38:19 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:56.233 21:38:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:56.233 21:38:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:56.233 21:38:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:56.233 21:38:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:56.233 21:38:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:56.233 21:38:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:56.233 21:38:19 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:56.233 INFO: launching applications... 00:05:56.233 21:38:19 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
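The trace above shows json_config_extra_key.sh seeding its per-app bookkeeping before anything is launched: Bash associative arrays keyed by the app name ('target') carry the PID, the RPC socket, the extra spdk_tgt parameters, and the JSON config to load, with an ERR trap around the whole test. A minimal sketch of that setup, restating only what the trace itself shows:

    declare -A app_pid=(['target']='')
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    declare -A configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
    trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR   # on_error_exit is defined elsewhere in the harness, not shown in this trace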
00:05:56.233 21:38:19 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:56.233 21:38:19 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:56.233 21:38:19 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:56.233 21:38:19 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:56.233 21:38:19 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:56.233 21:38:19 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:56.233 21:38:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:56.233 21:38:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:56.233 21:38:19 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69666 00:05:56.233 21:38:19 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:56.233 Waiting for target to run... 00:05:56.233 21:38:19 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69666 /var/tmp/spdk_tgt.sock 00:05:56.233 21:38:19 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 69666 ']' 00:05:56.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:56.233 21:38:19 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:56.233 21:38:19 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:56.233 21:38:19 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.233 21:38:19 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:56.233 21:38:19 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.233 21:38:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:56.233 [2024-11-27 21:38:19.240756] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:56.233 [2024-11-27 21:38:19.241027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69666 ] 00:05:56.494 [2024-11-27 21:38:19.544571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.494 [2024-11-27 21:38:19.556001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.067 21:38:20 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.067 21:38:20 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:57.067 21:38:20 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:57.067 00:05:57.067 21:38:20 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:57.067 INFO: shutting down applications... 
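The start sequence just traced backgrounds spdk_tgt with an RPC socket and the extra_key.json config, records its PID (69666), and blocks in waitforlisten until the socket answers. A hedged sketch of the same launch-and-wait pattern; waitforlisten is the SPDK helper seen in the trace, and the polling loop below is only an illustrative stand-in for it:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    app_pid['target']=$!
    # stand-in for waitforlisten: poll until the RPC socket accepts a call
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done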
00:05:57.067 21:38:20 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:57.067 21:38:20 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:57.067 21:38:20 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:57.067 21:38:20 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69666 ]] 00:05:57.067 21:38:20 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69666 00:05:57.067 21:38:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:57.067 21:38:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:57.067 21:38:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69666 00:05:57.067 21:38:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:57.639 SPDK target shutdown done 00:05:57.639 21:38:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:57.639 21:38:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:57.639 21:38:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69666 00:05:57.639 21:38:20 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:57.639 21:38:20 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:57.639 21:38:20 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:57.639 21:38:20 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:57.639 Success 00:05:57.639 21:38:20 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:57.639 ************************************ 00:05:57.639 END TEST json_config_extra_key 00:05:57.639 ************************************ 00:05:57.639 00:05:57.639 real 0m1.558s 00:05:57.639 user 0m1.254s 00:05:57.639 sys 0m0.347s 00:05:57.639 21:38:20 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.639 21:38:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:57.639 21:38:20 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:57.639 21:38:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.639 21:38:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.639 21:38:20 -- common/autotest_common.sh@10 -- # set +x 00:05:57.639 ************************************ 00:05:57.639 START TEST alias_rpc 00:05:57.639 ************************************ 00:05:57.639 21:38:20 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:57.639 * Looking for test storage... 
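The shutdown that follows is a plain signal-and-poll loop: SIGINT to the recorded PID, then up to thirty half-second checks with kill -0 until the process is gone. A minimal sketch of the loop as traced for pid 69666 (the error redirection is added here for tidiness, not taken from the trace):

    kill -SIGINT 69666
    for (( i = 0; i < 30; i++ )); do
        kill -0 69666 2>/dev/null || break   # target exited; stop polling
        sleep 0.5
    done
    echo 'SPDK target shutdown done'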
00:05:57.639 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:57.639 21:38:20 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:57.639 21:38:20 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:57.639 21:38:20 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:57.899 21:38:20 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:57.899 21:38:20 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.899 21:38:20 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.899 21:38:20 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.899 21:38:20 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.899 21:38:20 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.899 21:38:20 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.899 21:38:20 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.899 21:38:20 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.899 21:38:20 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.899 21:38:20 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.899 21:38:20 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.899 21:38:20 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:57.899 21:38:20 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:57.899 21:38:20 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.899 21:38:20 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:57.899 21:38:20 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:57.899 21:38:20 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:57.899 21:38:20 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.899 21:38:20 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:57.899 21:38:20 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.899 21:38:20 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:57.899 21:38:20 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:57.899 21:38:20 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.899 21:38:20 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:57.899 21:38:20 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.899 21:38:20 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.899 21:38:20 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.899 21:38:20 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:57.899 21:38:20 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.899 21:38:20 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:57.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.899 --rc genhtml_branch_coverage=1 00:05:57.899 --rc genhtml_function_coverage=1 00:05:57.899 --rc genhtml_legend=1 00:05:57.899 --rc geninfo_all_blocks=1 00:05:57.899 --rc geninfo_unexecuted_blocks=1 00:05:57.899 00:05:57.899 ' 00:05:57.899 21:38:20 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:57.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.899 --rc genhtml_branch_coverage=1 00:05:57.899 --rc genhtml_function_coverage=1 00:05:57.899 --rc genhtml_legend=1 00:05:57.899 --rc geninfo_all_blocks=1 00:05:57.899 --rc geninfo_unexecuted_blocks=1 00:05:57.899 00:05:57.899 ' 00:05:57.899 21:38:20 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:57.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.899 --rc genhtml_branch_coverage=1 00:05:57.899 --rc genhtml_function_coverage=1 00:05:57.899 --rc genhtml_legend=1 00:05:57.899 --rc geninfo_all_blocks=1 00:05:57.899 --rc geninfo_unexecuted_blocks=1 00:05:57.899 00:05:57.899 ' 00:05:57.899 21:38:20 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:57.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.899 --rc genhtml_branch_coverage=1 00:05:57.899 --rc genhtml_function_coverage=1 00:05:57.899 --rc genhtml_legend=1 00:05:57.899 --rc geninfo_all_blocks=1 00:05:57.899 --rc geninfo_unexecuted_blocks=1 00:05:57.899 00:05:57.899 ' 00:05:57.899 21:38:20 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:57.899 21:38:20 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69740 00:05:57.899 21:38:20 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69740 00:05:57.899 21:38:20 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 69740 ']' 00:05:57.899 21:38:20 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.899 21:38:20 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.899 21:38:20 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.899 21:38:20 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.899 21:38:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.899 21:38:20 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:57.899 [2024-11-27 21:38:20.865063] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:57.899 [2024-11-27 21:38:20.865183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69740 ] 00:05:57.899 [2024-11-27 21:38:21.009424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.159 [2024-11-27 21:38:21.028476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.730 21:38:21 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.730 21:38:21 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:58.730 21:38:21 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:58.990 21:38:21 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69740 00:05:58.990 21:38:21 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 69740 ']' 00:05:58.990 21:38:21 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 69740 00:05:58.990 21:38:21 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:58.990 21:38:21 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.990 21:38:21 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69740 00:05:58.990 killing process with pid 69740 00:05:58.990 21:38:21 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.990 21:38:21 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.990 21:38:21 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69740' 00:05:58.990 21:38:21 alias_rpc -- common/autotest_common.sh@973 -- # kill 69740 00:05:58.990 21:38:21 alias_rpc -- common/autotest_common.sh@978 -- # wait 69740 00:05:59.249 ************************************ 00:05:59.249 END TEST alias_rpc 00:05:59.249 ************************************ 00:05:59.249 00:05:59.249 real 0m1.554s 00:05:59.249 user 0m1.673s 00:05:59.249 sys 0m0.377s 00:05:59.249 21:38:22 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.249 21:38:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.249 21:38:22 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:59.249 21:38:22 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:59.249 21:38:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.249 21:38:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.249 21:38:22 -- common/autotest_common.sh@10 -- # set +x 00:05:59.249 ************************************ 00:05:59.249 START TEST spdkcli_tcp 00:05:59.249 ************************************ 00:05:59.249 21:38:22 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:59.249 * Looking for test storage... 
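killprocess, traced above for the alias_rpc target (pid 69740), checks that the PID still refers to an SPDK reactor before signalling it, then waits for it to exit so the test cannot leak a target process. A hedged sketch of that sequence; the sudo special-case visible in the trace is omitted here:

    pid=69740
    kill -0 "$pid"                                   # make sure the process exists
    process_name=$(ps --no-headers -o comm= "$pid")  # expected to be reactor_0
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"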
00:05:59.249 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:59.249 21:38:22 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:59.249 21:38:22 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:59.249 21:38:22 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:59.511 21:38:22 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:59.511 21:38:22 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.511 21:38:22 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.511 21:38:22 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.511 21:38:22 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.511 21:38:22 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.511 21:38:22 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.511 21:38:22 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.511 21:38:22 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.511 21:38:22 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.511 21:38:22 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.511 21:38:22 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.511 21:38:22 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:59.511 21:38:22 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:59.511 21:38:22 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.511 21:38:22 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:59.511 21:38:22 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:59.511 21:38:22 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:59.511 21:38:22 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.511 21:38:22 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:59.511 21:38:22 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.511 21:38:22 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:59.511 21:38:22 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:59.511 21:38:22 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.511 21:38:22 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:59.511 21:38:22 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.511 21:38:22 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.511 21:38:22 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.511 21:38:22 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:59.511 21:38:22 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.511 21:38:22 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:59.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.511 --rc genhtml_branch_coverage=1 00:05:59.511 --rc genhtml_function_coverage=1 00:05:59.511 --rc genhtml_legend=1 00:05:59.511 --rc geninfo_all_blocks=1 00:05:59.511 --rc geninfo_unexecuted_blocks=1 00:05:59.511 00:05:59.511 ' 00:05:59.511 21:38:22 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:59.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.511 --rc genhtml_branch_coverage=1 00:05:59.511 --rc genhtml_function_coverage=1 00:05:59.511 --rc genhtml_legend=1 00:05:59.511 --rc geninfo_all_blocks=1 00:05:59.511 --rc geninfo_unexecuted_blocks=1 00:05:59.511 
00:05:59.511 ' 00:05:59.511 21:38:22 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:59.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.511 --rc genhtml_branch_coverage=1 00:05:59.511 --rc genhtml_function_coverage=1 00:05:59.511 --rc genhtml_legend=1 00:05:59.511 --rc geninfo_all_blocks=1 00:05:59.511 --rc geninfo_unexecuted_blocks=1 00:05:59.511 00:05:59.511 ' 00:05:59.511 21:38:22 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:59.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.511 --rc genhtml_branch_coverage=1 00:05:59.511 --rc genhtml_function_coverage=1 00:05:59.511 --rc genhtml_legend=1 00:05:59.511 --rc geninfo_all_blocks=1 00:05:59.511 --rc geninfo_unexecuted_blocks=1 00:05:59.511 00:05:59.511 ' 00:05:59.511 21:38:22 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:59.511 21:38:22 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:59.511 21:38:22 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:59.511 21:38:22 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:59.511 21:38:22 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:59.511 21:38:22 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:59.511 21:38:22 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:59.511 21:38:22 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:59.511 21:38:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:59.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.511 21:38:22 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=69819 00:05:59.511 21:38:22 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 69819 00:05:59.511 21:38:22 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 69819 ']' 00:05:59.511 21:38:22 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.511 21:38:22 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.511 21:38:22 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.511 21:38:22 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.511 21:38:22 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:59.511 21:38:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:59.511 [2024-11-27 21:38:22.475456] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:59.511 [2024-11-27 21:38:22.475723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69819 ] 00:05:59.511 [2024-11-27 21:38:22.620162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.772 [2024-11-27 21:38:22.640645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.772 [2024-11-27 21:38:22.640695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.342 21:38:23 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.342 21:38:23 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:00.342 21:38:23 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=69836 00:06:00.342 21:38:23 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:00.342 21:38:23 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:00.600 [ 00:06:00.600 "bdev_malloc_delete", 00:06:00.600 "bdev_malloc_create", 00:06:00.600 "bdev_null_resize", 00:06:00.600 "bdev_null_delete", 00:06:00.600 "bdev_null_create", 00:06:00.600 "bdev_nvme_cuse_unregister", 00:06:00.600 "bdev_nvme_cuse_register", 00:06:00.600 "bdev_opal_new_user", 00:06:00.600 "bdev_opal_set_lock_state", 00:06:00.600 "bdev_opal_delete", 00:06:00.600 "bdev_opal_get_info", 00:06:00.600 "bdev_opal_create", 00:06:00.600 "bdev_nvme_opal_revert", 00:06:00.600 "bdev_nvme_opal_init", 00:06:00.600 "bdev_nvme_send_cmd", 00:06:00.600 "bdev_nvme_set_keys", 00:06:00.600 "bdev_nvme_get_path_iostat", 00:06:00.600 "bdev_nvme_get_mdns_discovery_info", 00:06:00.600 "bdev_nvme_stop_mdns_discovery", 00:06:00.600 "bdev_nvme_start_mdns_discovery", 00:06:00.600 "bdev_nvme_set_multipath_policy", 00:06:00.600 "bdev_nvme_set_preferred_path", 00:06:00.600 "bdev_nvme_get_io_paths", 00:06:00.600 "bdev_nvme_remove_error_injection", 00:06:00.600 "bdev_nvme_add_error_injection", 00:06:00.600 "bdev_nvme_get_discovery_info", 00:06:00.600 "bdev_nvme_stop_discovery", 00:06:00.600 "bdev_nvme_start_discovery", 00:06:00.600 "bdev_nvme_get_controller_health_info", 00:06:00.600 "bdev_nvme_disable_controller", 00:06:00.600 "bdev_nvme_enable_controller", 00:06:00.600 "bdev_nvme_reset_controller", 00:06:00.600 "bdev_nvme_get_transport_statistics", 00:06:00.600 "bdev_nvme_apply_firmware", 00:06:00.600 "bdev_nvme_detach_controller", 00:06:00.600 "bdev_nvme_get_controllers", 00:06:00.600 "bdev_nvme_attach_controller", 00:06:00.600 "bdev_nvme_set_hotplug", 00:06:00.600 "bdev_nvme_set_options", 00:06:00.600 "bdev_passthru_delete", 00:06:00.600 "bdev_passthru_create", 00:06:00.600 "bdev_lvol_set_parent_bdev", 00:06:00.600 "bdev_lvol_set_parent", 00:06:00.600 "bdev_lvol_check_shallow_copy", 00:06:00.600 "bdev_lvol_start_shallow_copy", 00:06:00.600 "bdev_lvol_grow_lvstore", 00:06:00.600 "bdev_lvol_get_lvols", 00:06:00.600 "bdev_lvol_get_lvstores", 00:06:00.600 "bdev_lvol_delete", 00:06:00.600 "bdev_lvol_set_read_only", 00:06:00.600 "bdev_lvol_resize", 00:06:00.600 "bdev_lvol_decouple_parent", 00:06:00.600 "bdev_lvol_inflate", 00:06:00.600 "bdev_lvol_rename", 00:06:00.600 "bdev_lvol_clone_bdev", 00:06:00.600 "bdev_lvol_clone", 00:06:00.600 "bdev_lvol_snapshot", 00:06:00.600 "bdev_lvol_create", 00:06:00.600 "bdev_lvol_delete_lvstore", 00:06:00.600 "bdev_lvol_rename_lvstore", 00:06:00.600 
"bdev_lvol_create_lvstore", 00:06:00.600 "bdev_raid_set_options", 00:06:00.600 "bdev_raid_remove_base_bdev", 00:06:00.600 "bdev_raid_add_base_bdev", 00:06:00.600 "bdev_raid_delete", 00:06:00.600 "bdev_raid_create", 00:06:00.600 "bdev_raid_get_bdevs", 00:06:00.600 "bdev_error_inject_error", 00:06:00.600 "bdev_error_delete", 00:06:00.600 "bdev_error_create", 00:06:00.600 "bdev_split_delete", 00:06:00.600 "bdev_split_create", 00:06:00.600 "bdev_delay_delete", 00:06:00.600 "bdev_delay_create", 00:06:00.600 "bdev_delay_update_latency", 00:06:00.600 "bdev_zone_block_delete", 00:06:00.600 "bdev_zone_block_create", 00:06:00.600 "blobfs_create", 00:06:00.600 "blobfs_detect", 00:06:00.600 "blobfs_set_cache_size", 00:06:00.600 "bdev_xnvme_delete", 00:06:00.600 "bdev_xnvme_create", 00:06:00.600 "bdev_aio_delete", 00:06:00.600 "bdev_aio_rescan", 00:06:00.600 "bdev_aio_create", 00:06:00.600 "bdev_ftl_set_property", 00:06:00.600 "bdev_ftl_get_properties", 00:06:00.600 "bdev_ftl_get_stats", 00:06:00.600 "bdev_ftl_unmap", 00:06:00.600 "bdev_ftl_unload", 00:06:00.600 "bdev_ftl_delete", 00:06:00.600 "bdev_ftl_load", 00:06:00.600 "bdev_ftl_create", 00:06:00.600 "bdev_virtio_attach_controller", 00:06:00.600 "bdev_virtio_scsi_get_devices", 00:06:00.600 "bdev_virtio_detach_controller", 00:06:00.600 "bdev_virtio_blk_set_hotplug", 00:06:00.600 "bdev_iscsi_delete", 00:06:00.600 "bdev_iscsi_create", 00:06:00.600 "bdev_iscsi_set_options", 00:06:00.600 "accel_error_inject_error", 00:06:00.600 "ioat_scan_accel_module", 00:06:00.600 "dsa_scan_accel_module", 00:06:00.600 "iaa_scan_accel_module", 00:06:00.600 "keyring_file_remove_key", 00:06:00.600 "keyring_file_add_key", 00:06:00.600 "keyring_linux_set_options", 00:06:00.600 "fsdev_aio_delete", 00:06:00.600 "fsdev_aio_create", 00:06:00.600 "iscsi_get_histogram", 00:06:00.600 "iscsi_enable_histogram", 00:06:00.600 "iscsi_set_options", 00:06:00.600 "iscsi_get_auth_groups", 00:06:00.600 "iscsi_auth_group_remove_secret", 00:06:00.600 "iscsi_auth_group_add_secret", 00:06:00.600 "iscsi_delete_auth_group", 00:06:00.600 "iscsi_create_auth_group", 00:06:00.600 "iscsi_set_discovery_auth", 00:06:00.600 "iscsi_get_options", 00:06:00.600 "iscsi_target_node_request_logout", 00:06:00.600 "iscsi_target_node_set_redirect", 00:06:00.600 "iscsi_target_node_set_auth", 00:06:00.600 "iscsi_target_node_add_lun", 00:06:00.600 "iscsi_get_stats", 00:06:00.600 "iscsi_get_connections", 00:06:00.600 "iscsi_portal_group_set_auth", 00:06:00.600 "iscsi_start_portal_group", 00:06:00.600 "iscsi_delete_portal_group", 00:06:00.600 "iscsi_create_portal_group", 00:06:00.600 "iscsi_get_portal_groups", 00:06:00.600 "iscsi_delete_target_node", 00:06:00.600 "iscsi_target_node_remove_pg_ig_maps", 00:06:00.600 "iscsi_target_node_add_pg_ig_maps", 00:06:00.600 "iscsi_create_target_node", 00:06:00.600 "iscsi_get_target_nodes", 00:06:00.600 "iscsi_delete_initiator_group", 00:06:00.600 "iscsi_initiator_group_remove_initiators", 00:06:00.600 "iscsi_initiator_group_add_initiators", 00:06:00.600 "iscsi_create_initiator_group", 00:06:00.600 "iscsi_get_initiator_groups", 00:06:00.600 "nvmf_set_crdt", 00:06:00.600 "nvmf_set_config", 00:06:00.600 "nvmf_set_max_subsystems", 00:06:00.600 "nvmf_stop_mdns_prr", 00:06:00.600 "nvmf_publish_mdns_prr", 00:06:00.600 "nvmf_subsystem_get_listeners", 00:06:00.600 "nvmf_subsystem_get_qpairs", 00:06:00.600 "nvmf_subsystem_get_controllers", 00:06:00.600 "nvmf_get_stats", 00:06:00.600 "nvmf_get_transports", 00:06:00.600 "nvmf_create_transport", 00:06:00.600 "nvmf_get_targets", 00:06:00.600 
"nvmf_delete_target", 00:06:00.600 "nvmf_create_target", 00:06:00.600 "nvmf_subsystem_allow_any_host", 00:06:00.600 "nvmf_subsystem_set_keys", 00:06:00.600 "nvmf_subsystem_remove_host", 00:06:00.600 "nvmf_subsystem_add_host", 00:06:00.600 "nvmf_ns_remove_host", 00:06:00.600 "nvmf_ns_add_host", 00:06:00.600 "nvmf_subsystem_remove_ns", 00:06:00.600 "nvmf_subsystem_set_ns_ana_group", 00:06:00.600 "nvmf_subsystem_add_ns", 00:06:00.600 "nvmf_subsystem_listener_set_ana_state", 00:06:00.600 "nvmf_discovery_get_referrals", 00:06:00.600 "nvmf_discovery_remove_referral", 00:06:00.600 "nvmf_discovery_add_referral", 00:06:00.600 "nvmf_subsystem_remove_listener", 00:06:00.600 "nvmf_subsystem_add_listener", 00:06:00.600 "nvmf_delete_subsystem", 00:06:00.600 "nvmf_create_subsystem", 00:06:00.600 "nvmf_get_subsystems", 00:06:00.600 "env_dpdk_get_mem_stats", 00:06:00.600 "nbd_get_disks", 00:06:00.600 "nbd_stop_disk", 00:06:00.600 "nbd_start_disk", 00:06:00.600 "ublk_recover_disk", 00:06:00.600 "ublk_get_disks", 00:06:00.600 "ublk_stop_disk", 00:06:00.600 "ublk_start_disk", 00:06:00.600 "ublk_destroy_target", 00:06:00.600 "ublk_create_target", 00:06:00.600 "virtio_blk_create_transport", 00:06:00.600 "virtio_blk_get_transports", 00:06:00.600 "vhost_controller_set_coalescing", 00:06:00.600 "vhost_get_controllers", 00:06:00.600 "vhost_delete_controller", 00:06:00.600 "vhost_create_blk_controller", 00:06:00.601 "vhost_scsi_controller_remove_target", 00:06:00.601 "vhost_scsi_controller_add_target", 00:06:00.601 "vhost_start_scsi_controller", 00:06:00.601 "vhost_create_scsi_controller", 00:06:00.601 "thread_set_cpumask", 00:06:00.601 "scheduler_set_options", 00:06:00.601 "framework_get_governor", 00:06:00.601 "framework_get_scheduler", 00:06:00.601 "framework_set_scheduler", 00:06:00.601 "framework_get_reactors", 00:06:00.601 "thread_get_io_channels", 00:06:00.601 "thread_get_pollers", 00:06:00.601 "thread_get_stats", 00:06:00.601 "framework_monitor_context_switch", 00:06:00.601 "spdk_kill_instance", 00:06:00.601 "log_enable_timestamps", 00:06:00.601 "log_get_flags", 00:06:00.601 "log_clear_flag", 00:06:00.601 "log_set_flag", 00:06:00.601 "log_get_level", 00:06:00.601 "log_set_level", 00:06:00.601 "log_get_print_level", 00:06:00.601 "log_set_print_level", 00:06:00.601 "framework_enable_cpumask_locks", 00:06:00.601 "framework_disable_cpumask_locks", 00:06:00.601 "framework_wait_init", 00:06:00.601 "framework_start_init", 00:06:00.601 "scsi_get_devices", 00:06:00.601 "bdev_get_histogram", 00:06:00.601 "bdev_enable_histogram", 00:06:00.601 "bdev_set_qos_limit", 00:06:00.601 "bdev_set_qd_sampling_period", 00:06:00.601 "bdev_get_bdevs", 00:06:00.601 "bdev_reset_iostat", 00:06:00.601 "bdev_get_iostat", 00:06:00.601 "bdev_examine", 00:06:00.601 "bdev_wait_for_examine", 00:06:00.601 "bdev_set_options", 00:06:00.601 "accel_get_stats", 00:06:00.601 "accel_set_options", 00:06:00.601 "accel_set_driver", 00:06:00.601 "accel_crypto_key_destroy", 00:06:00.601 "accel_crypto_keys_get", 00:06:00.601 "accel_crypto_key_create", 00:06:00.601 "accel_assign_opc", 00:06:00.601 "accel_get_module_info", 00:06:00.601 "accel_get_opc_assignments", 00:06:00.601 "vmd_rescan", 00:06:00.601 "vmd_remove_device", 00:06:00.601 "vmd_enable", 00:06:00.601 "sock_get_default_impl", 00:06:00.601 "sock_set_default_impl", 00:06:00.601 "sock_impl_set_options", 00:06:00.601 "sock_impl_get_options", 00:06:00.601 "iobuf_get_stats", 00:06:00.601 "iobuf_set_options", 00:06:00.601 "keyring_get_keys", 00:06:00.601 "framework_get_pci_devices", 00:06:00.601 
"framework_get_config", 00:06:00.601 "framework_get_subsystems", 00:06:00.601 "fsdev_set_opts", 00:06:00.601 "fsdev_get_opts", 00:06:00.601 "trace_get_info", 00:06:00.601 "trace_get_tpoint_group_mask", 00:06:00.601 "trace_disable_tpoint_group", 00:06:00.601 "trace_enable_tpoint_group", 00:06:00.601 "trace_clear_tpoint_mask", 00:06:00.601 "trace_set_tpoint_mask", 00:06:00.601 "notify_get_notifications", 00:06:00.601 "notify_get_types", 00:06:00.601 "spdk_get_version", 00:06:00.601 "rpc_get_methods" 00:06:00.601 ] 00:06:00.601 21:38:23 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:00.601 21:38:23 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:00.601 21:38:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.601 21:38:23 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:00.601 21:38:23 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 69819 00:06:00.601 21:38:23 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 69819 ']' 00:06:00.601 21:38:23 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 69819 00:06:00.601 21:38:23 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:00.601 21:38:23 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.601 21:38:23 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69819 00:06:00.601 killing process with pid 69819 00:06:00.601 21:38:23 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.601 21:38:23 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.601 21:38:23 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69819' 00:06:00.601 21:38:23 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 69819 00:06:00.601 21:38:23 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 69819 00:06:00.858 ************************************ 00:06:00.858 END TEST spdkcli_tcp 00:06:00.858 ************************************ 00:06:00.858 00:06:00.858 real 0m1.552s 00:06:00.858 user 0m2.809s 00:06:00.858 sys 0m0.358s 00:06:00.858 21:38:23 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.858 21:38:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.858 21:38:23 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:00.858 21:38:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.858 21:38:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.858 21:38:23 -- common/autotest_common.sh@10 -- # set +x 00:06:00.858 ************************************ 00:06:00.858 START TEST dpdk_mem_utility 00:06:00.858 ************************************ 00:06:00.858 21:38:23 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:00.858 * Looking for test storage... 
00:06:00.858 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:00.858 21:38:23 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:00.858 21:38:23 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:00.858 21:38:23 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:01.118 21:38:23 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:01.118 21:38:23 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.118 21:38:23 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.118 21:38:23 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.118 21:38:23 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.118 21:38:23 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.118 21:38:23 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.118 21:38:23 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.118 21:38:23 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.118 21:38:23 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.118 21:38:23 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.118 21:38:23 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.118 21:38:23 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:01.118 21:38:23 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:01.118 21:38:23 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.118 21:38:23 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:01.118 21:38:23 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:01.118 21:38:23 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:01.118 21:38:23 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.118 21:38:23 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:01.118 21:38:23 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.118 21:38:23 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:01.118 21:38:23 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:01.118 21:38:23 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.118 21:38:23 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:01.118 21:38:23 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.118 21:38:23 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.118 21:38:23 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.118 21:38:23 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:01.118 21:38:23 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.118 21:38:23 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:01.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.118 --rc genhtml_branch_coverage=1 00:06:01.118 --rc genhtml_function_coverage=1 00:06:01.118 --rc genhtml_legend=1 00:06:01.118 --rc geninfo_all_blocks=1 00:06:01.118 --rc geninfo_unexecuted_blocks=1 00:06:01.118 00:06:01.118 ' 00:06:01.118 21:38:23 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:01.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.118 --rc 
genhtml_branch_coverage=1 00:06:01.118 --rc genhtml_function_coverage=1 00:06:01.118 --rc genhtml_legend=1 00:06:01.118 --rc geninfo_all_blocks=1 00:06:01.118 --rc geninfo_unexecuted_blocks=1 00:06:01.118 00:06:01.118 ' 00:06:01.118 21:38:23 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:01.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.118 --rc genhtml_branch_coverage=1 00:06:01.118 --rc genhtml_function_coverage=1 00:06:01.118 --rc genhtml_legend=1 00:06:01.118 --rc geninfo_all_blocks=1 00:06:01.118 --rc geninfo_unexecuted_blocks=1 00:06:01.118 00:06:01.118 ' 00:06:01.118 21:38:23 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:01.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.118 --rc genhtml_branch_coverage=1 00:06:01.118 --rc genhtml_function_coverage=1 00:06:01.118 --rc genhtml_legend=1 00:06:01.118 --rc geninfo_all_blocks=1 00:06:01.118 --rc geninfo_unexecuted_blocks=1 00:06:01.118 00:06:01.118 ' 00:06:01.118 21:38:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:01.118 21:38:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=69914 00:06:01.118 21:38:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 69914 00:06:01.118 21:38:23 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 69914 ']' 00:06:01.118 21:38:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.118 21:38:23 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.118 21:38:23 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.118 21:38:23 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.118 21:38:23 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.118 21:38:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:01.118 [2024-11-27 21:38:24.070882] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:01.118 [2024-11-27 21:38:24.071108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69914 ] 00:06:01.118 [2024-11-27 21:38:24.218619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.378 [2024-11-27 21:38:24.237637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.948 21:38:24 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.948 21:38:24 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:01.948 21:38:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:01.948 21:38:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:01.948 21:38:24 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.948 21:38:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:01.948 { 00:06:01.948 "filename": "/tmp/spdk_mem_dump.txt" 00:06:01.948 } 00:06:01.948 21:38:24 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.948 21:38:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:01.948 DPDK memory size 818.000000 MiB in 1 heap(s) 00:06:01.948 1 heaps totaling size 818.000000 MiB 00:06:01.948 size: 818.000000 MiB heap id: 0 00:06:01.948 end heaps---------- 00:06:01.948 9 mempools totaling size 603.782043 MiB 00:06:01.948 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:01.948 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:01.948 size: 100.555481 MiB name: bdev_io_69914 00:06:01.948 size: 50.003479 MiB name: msgpool_69914 00:06:01.948 size: 36.509338 MiB name: fsdev_io_69914 00:06:01.948 size: 21.763794 MiB name: PDU_Pool 00:06:01.948 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:01.948 size: 4.133484 MiB name: evtpool_69914 00:06:01.948 size: 0.026123 MiB name: Session_Pool 00:06:01.948 end mempools------- 00:06:01.948 6 memzones totaling size 4.142822 MiB 00:06:01.948 size: 1.000366 MiB name: RG_ring_0_69914 00:06:01.948 size: 1.000366 MiB name: RG_ring_1_69914 00:06:01.948 size: 1.000366 MiB name: RG_ring_4_69914 00:06:01.948 size: 1.000366 MiB name: RG_ring_5_69914 00:06:01.948 size: 0.125366 MiB name: RG_ring_2_69914 00:06:01.948 size: 0.015991 MiB name: RG_ring_3_69914 00:06:01.948 end memzones------- 00:06:01.948 21:38:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:01.949 heap id: 0 total size: 818.000000 MiB number of busy elements: 317 number of free elements: 15 00:06:01.949 list of free elements. 
size: 10.802490 MiB 00:06:01.949 element at address: 0x200019200000 with size: 0.999878 MiB 00:06:01.949 element at address: 0x200019400000 with size: 0.999878 MiB 00:06:01.949 element at address: 0x200032000000 with size: 0.994446 MiB 00:06:01.949 element at address: 0x200000400000 with size: 0.993958 MiB 00:06:01.949 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:01.949 element at address: 0x200012c00000 with size: 0.944275 MiB 00:06:01.949 element at address: 0x200019600000 with size: 0.936584 MiB 00:06:01.949 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:01.949 element at address: 0x20001ae00000 with size: 0.567688 MiB 00:06:01.949 element at address: 0x20000a600000 with size: 0.488892 MiB 00:06:01.949 element at address: 0x200000c00000 with size: 0.486267 MiB 00:06:01.949 element at address: 0x200019800000 with size: 0.485657 MiB 00:06:01.949 element at address: 0x200003e00000 with size: 0.480286 MiB 00:06:01.949 element at address: 0x200028200000 with size: 0.395752 MiB 00:06:01.949 element at address: 0x200000800000 with size: 0.351746 MiB 00:06:01.949 list of standard malloc elements. size: 199.268616 MiB 00:06:01.949 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:01.949 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:01.949 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:01.949 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:06:01.949 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:06:01.949 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:01.949 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:06:01.949 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:01.949 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:06:01.949 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:06:01.949 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:01.949 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x20000085e580 with size: 0.000183 MiB 00:06:01.949 element at address: 0x20000087e840 with size: 0.000183 MiB 00:06:01.949 element at address: 0x20000087e900 with size: 0.000183 MiB 00:06:01.949 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:06:01.949 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:06:01.949 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:06:01.949 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:06:01.949 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:06:01.949 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:06:01.949 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x20000087f080 with size: 0.000183 MiB 00:06:01.949 element at address: 0x20000087f140 with size: 0.000183 MiB 00:06:01.949 element at address: 0x20000087f200 with size: 0.000183 MiB 00:06:01.949 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x20000087f380 with size: 0.000183 MiB 00:06:01.949 element at address: 0x20000087f440 with size: 0.000183 MiB 00:06:01.949 element at address: 0x20000087f500 with size: 0.000183 MiB 00:06:01.949 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:06:01.949 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:01.949 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:01.949 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:06:01.949 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:06:01.949 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:06:01.949 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:06:01.949 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:01.950 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:06:01.950 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:06:01.950 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae91540 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae92c80 with size: 0.000183 MiB 
00:06:01.950 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:06:01.950 element at 
address: 0x20001ae95200 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:06:01.950 element at address: 0x200028265500 with size: 0.000183 MiB 00:06:01.950 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826c480 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826c540 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826c600 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826c780 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826c840 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826c900 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826d080 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826d140 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826d200 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826d380 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826d440 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826d500 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826d680 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826d740 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826d800 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826d980 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826da40 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826db00 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826de00 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826df80 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826e040 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826e100 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826e280 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826e340 
with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826e400 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826e580 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826e640 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826e700 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826e880 with size: 0.000183 MiB 00:06:01.950 element at address: 0x20002826e940 with size: 0.000183 MiB 00:06:01.951 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:06:01.951 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:06:01.951 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:06:01.951 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:06:01.951 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:06:01.951 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:06:01.951 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:06:01.951 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:06:01.951 element at address: 0x20002826f000 with size: 0.000183 MiB 00:06:01.951 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:06:01.951 element at address: 0x20002826f180 with size: 0.000183 MiB 00:06:01.951 element at address: 0x20002826f240 with size: 0.000183 MiB 00:06:01.951 element at address: 0x20002826f300 with size: 0.000183 MiB 00:06:01.951 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:06:01.951 element at address: 0x20002826f480 with size: 0.000183 MiB 00:06:01.951 element at address: 0x20002826f540 with size: 0.000183 MiB 00:06:01.951 element at address: 0x20002826f600 with size: 0.000183 MiB 00:06:01.951 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:06:01.951 element at address: 0x20002826f780 with size: 0.000183 MiB 00:06:01.951 element at address: 0x20002826f840 with size: 0.000183 MiB 00:06:01.951 element at address: 0x20002826f900 with size: 0.000183 MiB 00:06:01.951 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:06:01.951 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:06:01.951 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:06:01.951 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:06:01.951 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:06:01.951 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:06:01.951 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:06:01.951 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:06:01.951 list of memzone associated elements. 
size: 607.928894 MiB 00:06:01.951 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:06:01.951 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:01.951 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:06:01.951 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:01.951 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:06:01.951 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_69914_0 00:06:01.951 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:01.951 associated memzone info: size: 48.002930 MiB name: MP_msgpool_69914_0 00:06:01.951 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:01.951 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_69914_0 00:06:01.951 element at address: 0x2000199be940 with size: 20.255554 MiB 00:06:01.951 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:01.951 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:06:01.951 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:01.951 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:01.951 associated memzone info: size: 3.000122 MiB name: MP_evtpool_69914_0 00:06:01.951 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:01.951 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_69914 00:06:01.951 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:01.951 associated memzone info: size: 1.007996 MiB name: MP_evtpool_69914 00:06:01.951 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:01.951 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:01.951 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:06:01.951 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:01.951 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:01.951 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:01.951 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:01.951 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:01.951 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:01.951 associated memzone info: size: 1.000366 MiB name: RG_ring_0_69914 00:06:01.951 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:01.951 associated memzone info: size: 1.000366 MiB name: RG_ring_1_69914 00:06:01.951 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:06:01.951 associated memzone info: size: 1.000366 MiB name: RG_ring_4_69914 00:06:01.951 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:06:01.951 associated memzone info: size: 1.000366 MiB name: RG_ring_5_69914 00:06:01.951 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:01.951 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_69914 00:06:01.951 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:01.951 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_69914 00:06:01.951 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:01.951 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:01.951 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:01.951 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:01.951 element at address: 0x20001987c540 with size: 0.250488 MiB 00:06:01.951 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:06:01.951 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:01.951 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_69914 00:06:01.951 element at address: 0x20000085e640 with size: 0.125488 MiB 00:06:01.951 associated memzone info: size: 0.125366 MiB name: RG_ring_2_69914 00:06:01.951 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:01.951 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:01.951 element at address: 0x200028265680 with size: 0.023743 MiB 00:06:01.951 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:01.951 element at address: 0x20000085a380 with size: 0.016113 MiB 00:06:01.951 associated memzone info: size: 0.015991 MiB name: RG_ring_3_69914 00:06:01.951 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:06:01.951 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:01.951 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:06:01.951 associated memzone info: size: 0.000183 MiB name: MP_msgpool_69914 00:06:01.951 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:01.951 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_69914 00:06:01.951 element at address: 0x20000085a180 with size: 0.000305 MiB 00:06:01.951 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_69914 00:06:01.951 element at address: 0x20002826c280 with size: 0.000305 MiB 00:06:01.951 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:01.951 21:38:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:01.951 21:38:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 69914 00:06:01.951 21:38:25 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 69914 ']' 00:06:01.951 21:38:25 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 69914 00:06:01.951 21:38:25 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:01.951 21:38:25 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.951 21:38:25 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69914 00:06:01.951 21:38:25 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.951 21:38:25 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.951 21:38:25 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69914' 00:06:01.951 killing process with pid 69914 00:06:01.951 21:38:25 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 69914 00:06:01.952 21:38:25 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 69914 00:06:02.211 00:06:02.211 real 0m1.428s 00:06:02.211 user 0m1.495s 00:06:02.211 sys 0m0.333s 00:06:02.211 21:38:25 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.211 21:38:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:02.211 ************************************ 00:06:02.211 END TEST dpdk_mem_utility 00:06:02.211 ************************************ 00:06:02.211 21:38:25 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:02.211 21:38:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.211 21:38:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.211 21:38:25 -- common/autotest_common.sh@10 -- # set +x 
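The dpdk_mem_utility run that just finished reduces to a handful of steps that could be replayed by hand against a running spdk_tgt. The outline below is a sketch only, built from the paths, RPC name, and options visible in the trace above; it assumes rpc.py sits in the same scripts/ directory as dpdk_mem_info.py.
    # minimal manual replay of test_dpdk_mem_info.sh, assuming the SPDK checkout used above
    cd /home/vagrant/spdk_repo/spdk
    ./build/bin/spdk_tgt &                      # start the target (prints the EAL parameters seen above)
    ./scripts/rpc.py env_dpdk_get_mem_stats     # writes the dump to /tmp/spdk_mem_dump.txt, as reported above
    ./scripts/dpdk_mem_info.py                  # heap / mempool / memzone summary
    ./scripts/dpdk_mem_info.py -m 0             # per-element detail for heap id 0, as dumped above
    kill %1                                     # rough equivalent of killprocess $spdkpid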
00:06:02.472 ************************************ 00:06:02.472 START TEST event 00:06:02.472 ************************************ 00:06:02.472 21:38:25 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:02.472 * Looking for test storage... 00:06:02.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:02.472 21:38:25 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:02.472 21:38:25 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:02.472 21:38:25 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:02.472 21:38:25 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:02.472 21:38:25 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.472 21:38:25 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.472 21:38:25 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.472 21:38:25 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.472 21:38:25 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.472 21:38:25 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.472 21:38:25 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.472 21:38:25 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.472 21:38:25 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.472 21:38:25 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.472 21:38:25 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.472 21:38:25 event -- scripts/common.sh@344 -- # case "$op" in 00:06:02.472 21:38:25 event -- scripts/common.sh@345 -- # : 1 00:06:02.472 21:38:25 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.472 21:38:25 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:02.472 21:38:25 event -- scripts/common.sh@365 -- # decimal 1 00:06:02.472 21:38:25 event -- scripts/common.sh@353 -- # local d=1 00:06:02.472 21:38:25 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.472 21:38:25 event -- scripts/common.sh@355 -- # echo 1 00:06:02.472 21:38:25 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.472 21:38:25 event -- scripts/common.sh@366 -- # decimal 2 00:06:02.472 21:38:25 event -- scripts/common.sh@353 -- # local d=2 00:06:02.472 21:38:25 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.472 21:38:25 event -- scripts/common.sh@355 -- # echo 2 00:06:02.472 21:38:25 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.472 21:38:25 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.472 21:38:25 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.472 21:38:25 event -- scripts/common.sh@368 -- # return 0 00:06:02.472 21:38:25 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.472 21:38:25 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:02.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.472 --rc genhtml_branch_coverage=1 00:06:02.472 --rc genhtml_function_coverage=1 00:06:02.472 --rc genhtml_legend=1 00:06:02.472 --rc geninfo_all_blocks=1 00:06:02.472 --rc geninfo_unexecuted_blocks=1 00:06:02.472 00:06:02.472 ' 00:06:02.472 21:38:25 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:02.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.472 --rc genhtml_branch_coverage=1 00:06:02.472 --rc genhtml_function_coverage=1 00:06:02.472 --rc genhtml_legend=1 00:06:02.472 --rc 
geninfo_all_blocks=1 00:06:02.472 --rc geninfo_unexecuted_blocks=1 00:06:02.472 00:06:02.472 ' 00:06:02.472 21:38:25 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:02.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.472 --rc genhtml_branch_coverage=1 00:06:02.472 --rc genhtml_function_coverage=1 00:06:02.472 --rc genhtml_legend=1 00:06:02.472 --rc geninfo_all_blocks=1 00:06:02.472 --rc geninfo_unexecuted_blocks=1 00:06:02.472 00:06:02.472 ' 00:06:02.472 21:38:25 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:02.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.472 --rc genhtml_branch_coverage=1 00:06:02.472 --rc genhtml_function_coverage=1 00:06:02.472 --rc genhtml_legend=1 00:06:02.472 --rc geninfo_all_blocks=1 00:06:02.472 --rc geninfo_unexecuted_blocks=1 00:06:02.472 00:06:02.472 ' 00:06:02.472 21:38:25 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:02.472 21:38:25 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:02.472 21:38:25 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:02.472 21:38:25 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:02.472 21:38:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.472 21:38:25 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.472 ************************************ 00:06:02.472 START TEST event_perf 00:06:02.472 ************************************ 00:06:02.472 21:38:25 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:02.472 Running I/O for 1 seconds...[2024-11-27 21:38:25.518246] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:02.473 [2024-11-27 21:38:25.518456] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69994 ] 00:06:02.731 [2024-11-27 21:38:25.664322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:02.731 [2024-11-27 21:38:25.698220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.731 [2024-11-27 21:38:25.698589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.731 [2024-11-27 21:38:25.699393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:02.731 Running I/O for 1 seconds...[2024-11-27 21:38:25.699434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.668 00:06:03.668 lcore 0: 188867 00:06:03.668 lcore 1: 188867 00:06:03.668 lcore 2: 188868 00:06:03.668 lcore 3: 188868 00:06:03.668 done. 
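Reading the event_perf output above: each "lcore N: count" line is the number of events that reactor handled during the timed window requested with -t. The same binary can be pointed at a different core mask or duration with the two flags event.sh already uses; the values below are illustrative only.
    # hedged example: a 5-second run on two cores instead of the 1-second, 4-core run above
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0x3 -t 5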
00:06:03.668 00:06:03.668 real 0m1.246s 00:06:03.668 user 0m4.058s 00:06:03.668 sys 0m0.072s 00:06:03.668 21:38:26 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.668 21:38:26 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:03.668 ************************************ 00:06:03.668 END TEST event_perf 00:06:03.668 ************************************ 00:06:03.668 21:38:26 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:03.668 21:38:26 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:03.668 21:38:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.668 21:38:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.668 ************************************ 00:06:03.668 START TEST event_reactor 00:06:03.668 ************************************ 00:06:03.668 21:38:26 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:03.927 [2024-11-27 21:38:26.806278] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:03.927 [2024-11-27 21:38:26.806461] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70034 ] 00:06:03.927 [2024-11-27 21:38:26.947476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.927 [2024-11-27 21:38:26.964759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.305 test_start 00:06:05.305 oneshot 00:06:05.305 tick 100 00:06:05.305 tick 100 00:06:05.305 tick 250 00:06:05.305 tick 100 00:06:05.305 tick 100 00:06:05.305 tick 100 00:06:05.305 tick 250 00:06:05.305 tick 500 00:06:05.305 tick 100 00:06:05.305 tick 100 00:06:05.305 tick 250 00:06:05.305 tick 100 00:06:05.305 tick 100 00:06:05.305 test_end 00:06:05.305 ************************************ 00:06:05.305 END TEST event_reactor 00:06:05.305 ************************************ 00:06:05.305 00:06:05.305 real 0m1.224s 00:06:05.305 user 0m1.067s 00:06:05.305 sys 0m0.050s 00:06:05.305 21:38:28 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.305 21:38:28 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:05.305 21:38:28 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:05.305 21:38:28 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:05.305 21:38:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.305 21:38:28 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.305 ************************************ 00:06:05.305 START TEST event_reactor_perf 00:06:05.305 ************************************ 00:06:05.305 21:38:28 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:05.306 [2024-11-27 21:38:28.092252] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:05.306 [2024-11-27 21:38:28.092513] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70065 ] 00:06:05.306 [2024-11-27 21:38:28.234533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.306 [2024-11-27 21:38:28.252828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.282 test_start 00:06:06.282 test_end 00:06:06.282 Performance: 315912 events per second 00:06:06.282 00:06:06.282 real 0m1.225s 00:06:06.282 user 0m1.073s 00:06:06.282 sys 0m0.045s 00:06:06.282 21:38:29 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.282 ************************************ 00:06:06.282 END TEST event_reactor_perf 00:06:06.282 ************************************ 00:06:06.282 21:38:29 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:06.282 21:38:29 event -- event/event.sh@49 -- # uname -s 00:06:06.282 21:38:29 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:06.282 21:38:29 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:06.282 21:38:29 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.282 21:38:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.282 21:38:29 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.282 ************************************ 00:06:06.282 START TEST event_scheduler 00:06:06.282 ************************************ 00:06:06.282 21:38:29 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:06.544 * Looking for test storage... 
00:06:06.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:06.544 21:38:29 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:06.544 21:38:29 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:06.544 21:38:29 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:06.544 21:38:29 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:06.544 21:38:29 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.544 21:38:29 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.544 21:38:29 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.544 21:38:29 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.544 21:38:29 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.544 21:38:29 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.544 21:38:29 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.544 21:38:29 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.544 21:38:29 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.544 21:38:29 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.544 21:38:29 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.544 21:38:29 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:06.544 21:38:29 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:06.544 21:38:29 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.544 21:38:29 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.544 21:38:29 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:06.544 21:38:29 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:06.544 21:38:29 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.544 21:38:29 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:06.544 21:38:29 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.544 21:38:29 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:06.544 21:38:29 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:06.544 21:38:29 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.544 21:38:29 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:06.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:06.544 21:38:29 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.544 21:38:29 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.544 21:38:29 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.544 21:38:29 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:06.544 21:38:29 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.544 21:38:29 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:06.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.544 --rc genhtml_branch_coverage=1 00:06:06.544 --rc genhtml_function_coverage=1 00:06:06.544 --rc genhtml_legend=1 00:06:06.544 --rc geninfo_all_blocks=1 00:06:06.544 --rc geninfo_unexecuted_blocks=1 00:06:06.544 00:06:06.544 ' 00:06:06.544 21:38:29 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:06.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.544 --rc genhtml_branch_coverage=1 00:06:06.544 --rc genhtml_function_coverage=1 00:06:06.544 --rc genhtml_legend=1 00:06:06.544 --rc geninfo_all_blocks=1 00:06:06.544 --rc geninfo_unexecuted_blocks=1 00:06:06.544 00:06:06.544 ' 00:06:06.544 21:38:29 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:06.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.544 --rc genhtml_branch_coverage=1 00:06:06.544 --rc genhtml_function_coverage=1 00:06:06.544 --rc genhtml_legend=1 00:06:06.544 --rc geninfo_all_blocks=1 00:06:06.544 --rc geninfo_unexecuted_blocks=1 00:06:06.544 00:06:06.544 ' 00:06:06.544 21:38:29 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:06.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.544 --rc genhtml_branch_coverage=1 00:06:06.544 --rc genhtml_function_coverage=1 00:06:06.544 --rc genhtml_legend=1 00:06:06.544 --rc geninfo_all_blocks=1 00:06:06.544 --rc geninfo_unexecuted_blocks=1 00:06:06.544 00:06:06.544 ' 00:06:06.544 21:38:29 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:06.544 21:38:29 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70135 00:06:06.544 21:38:29 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:06.544 21:38:29 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70135 00:06:06.544 21:38:29 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 70135 ']' 00:06:06.544 21:38:29 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.544 21:38:29 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.544 21:38:29 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.544 21:38:29 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.544 21:38:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:06.544 21:38:29 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:06.544 [2024-11-27 21:38:29.558581] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:06.544 [2024-11-27 21:38:29.558695] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70135 ] 00:06:06.803 [2024-11-27 21:38:29.703312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:06.803 [2024-11-27 21:38:29.725685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.803 [2024-11-27 21:38:29.725914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.803 [2024-11-27 21:38:29.726384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.803 [2024-11-27 21:38:29.726428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.372 21:38:30 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.372 21:38:30 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:07.372 21:38:30 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:07.372 21:38:30 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.372 21:38:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:07.372 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:07.372 POWER: Cannot set governor of lcore 0 to userspace 00:06:07.372 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:07.372 POWER: Cannot set governor of lcore 0 to performance 00:06:07.372 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:07.372 POWER: Cannot set governor of lcore 0 to userspace 00:06:07.372 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:07.372 POWER: Unable to set Power Management Environment for lcore 0 00:06:07.372 [2024-11-27 21:38:30.431921] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:07.372 [2024-11-27 21:38:30.431939] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:07.372 [2024-11-27 21:38:30.431968] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:07.372 [2024-11-27 21:38:30.431993] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:07.372 [2024-11-27 21:38:30.432010] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:07.372 [2024-11-27 21:38:30.432019] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:07.372 21:38:30 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.372 21:38:30 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:07.372 21:38:30 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.372 21:38:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:07.372 [2024-11-27 21:38:30.489013] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
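The POWER / cpufreq and GUEST_CHANNEL errors above mean the DPDK governor cannot be initialized inside this VM, so the dynamic scheduler simply continues with its default load/core/busy limits. Because the scheduler app was launched with --wait-for-rpc, the script drives the rest over RPC; a sketch of the same two calls issued by hand (assuming scripts/rpc.py from the same checkout and the default /var/tmp/spdk.sock socket):
    ./scripts/rpc.py framework_set_scheduler dynamic   # scheduler.sh@39 above
    ./scripts/rpc.py framework_start_init              # scheduler.sh@40 above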
00:06:07.633 21:38:30 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.633 21:38:30 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:07.633 21:38:30 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.633 21:38:30 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.633 21:38:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:07.633 ************************************ 00:06:07.633 START TEST scheduler_create_thread 00:06:07.633 ************************************ 00:06:07.633 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:07.633 21:38:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:07.633 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.633 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.633 2 00:06:07.633 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.633 21:38:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:07.633 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.633 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.634 3 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.634 4 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.634 5 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.634 6 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.634 7 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.634 8 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.634 9 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.634 10 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.634 21:38:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.206 ************************************ 00:06:08.206 END TEST scheduler_create_thread 00:06:08.206 ************************************ 00:06:08.206 21:38:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.206 00:06:08.206 real 0m0.591s 00:06:08.206 user 0m0.009s 00:06:08.206 sys 0m0.008s 00:06:08.206 21:38:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.206 21:38:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.206 21:38:31 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:08.206 21:38:31 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70135 00:06:08.206 21:38:31 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 70135 ']' 00:06:08.206 21:38:31 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 70135 00:06:08.206 21:38:31 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:08.206 21:38:31 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.206 21:38:31 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70135 00:06:08.206 killing process with pid 70135 00:06:08.206 21:38:31 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:08.206 21:38:31 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:08.206 21:38:31 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70135' 00:06:08.206 21:38:31 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 70135 00:06:08.206 21:38:31 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 70135 00:06:08.466 [2024-11-27 21:38:31.574608] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
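
The scheduler_create_thread test that just finished drives the interactive scheduler app purely through plugin RPCs. The sequence below is a minimal sketch reconstructed from the xtrace above, not the literal contents of scheduler/scheduler.sh; the real script issues each call on its own numbered line, and rpc_cmd is assumed to be the autotest wrapper around scripts/rpc.py talking to the running app.

# one active (load 100) and one idle (load 0) thread pinned to each of the first four cores
for mask in 0x1 0x2 0x4 0x8; do
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
done
# unpinned threads with partial load; half_active is created idle and then raised to 50%
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
# finally a throw-away thread is created and immediately deleted again
thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"
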
00:06:08.726 ************************************ 00:06:08.726 END TEST event_scheduler 00:06:08.726 ************************************ 00:06:08.726 00:06:08.726 real 0m2.358s 00:06:08.726 user 0m4.711s 00:06:08.726 sys 0m0.322s 00:06:08.726 21:38:31 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.726 21:38:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:08.726 21:38:31 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:08.726 21:38:31 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:08.726 21:38:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.726 21:38:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.727 21:38:31 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.727 ************************************ 00:06:08.727 START TEST app_repeat 00:06:08.727 ************************************ 00:06:08.727 21:38:31 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:08.727 21:38:31 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.727 21:38:31 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.727 21:38:31 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:08.727 21:38:31 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.727 21:38:31 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:08.727 21:38:31 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:08.727 21:38:31 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:08.727 Process app_repeat pid: 70214 00:06:08.727 spdk_app_start Round 0 00:06:08.727 21:38:31 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70214 00:06:08.727 21:38:31 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:08.727 21:38:31 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70214' 00:06:08.727 21:38:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:08.727 21:38:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:08.727 21:38:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70214 /var/tmp/spdk-nbd.sock 00:06:08.727 21:38:31 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:08.727 21:38:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70214 ']' 00:06:08.727 21:38:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:08.727 21:38:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:08.727 21:38:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:08.727 21:38:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.727 21:38:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:08.727 [2024-11-27 21:38:31.809772] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
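
For the app_repeat test starting here, the launch sequence visible in the trace (event.sh@17-25) amounts to the sketch below. $rootdir stands in for /home/vagrant/spdk_repo/spdk, and the backgrounding plus waitforlisten pairing is inferred from the pid and socket that appear in the log rather than copied verbatim from event.sh.

modprobe nbd
"$rootdir"/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
repeat_pid=$!    # 70214 in this run
trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
# block until the app is up and listening on its RPC socket before issuing bdev RPCs
waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
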
00:06:08.727 [2024-11-27 21:38:31.809884] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70214 ] 00:06:08.987 [2024-11-27 21:38:31.955556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.987 [2024-11-27 21:38:31.975906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.987 [2024-11-27 21:38:31.975945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.926 21:38:32 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.926 21:38:32 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:09.926 21:38:32 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.926 Malloc0 00:06:09.926 21:38:32 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.186 Malloc1 00:06:10.186 21:38:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.186 21:38:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.186 21:38:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.186 21:38:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:10.186 21:38:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.186 21:38:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:10.186 21:38:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.186 21:38:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.186 21:38:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.186 21:38:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:10.186 21:38:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.186 21:38:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:10.186 21:38:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:10.186 21:38:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:10.186 21:38:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.186 21:38:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:10.445 /dev/nbd0 00:06:10.445 21:38:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:10.445 21:38:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:10.445 21:38:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:10.445 21:38:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:10.445 21:38:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:10.445 21:38:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:10.445 21:38:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:10.445 21:38:33 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:10.445 21:38:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:10.445 21:38:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:10.445 21:38:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.445 1+0 records in 00:06:10.445 1+0 records out 00:06:10.445 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409958 s, 10.0 MB/s 00:06:10.445 21:38:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.445 21:38:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:10.445 21:38:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.445 21:38:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:10.445 21:38:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:10.446 21:38:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.446 21:38:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.446 21:38:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:10.703 /dev/nbd1 00:06:10.703 21:38:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:10.703 21:38:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:10.703 21:38:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:10.703 21:38:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:10.703 21:38:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:10.703 21:38:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:10.703 21:38:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:10.703 21:38:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:10.703 21:38:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:10.703 21:38:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:10.703 21:38:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.703 1+0 records in 00:06:10.703 1+0 records out 00:06:10.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417047 s, 9.8 MB/s 00:06:10.703 21:38:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.703 21:38:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:10.703 21:38:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.703 21:38:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:10.703 21:38:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:10.703 21:38:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.703 21:38:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.703 21:38:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.703 21:38:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.703 
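
The waitfornbd helper exercised above (common/autotest_common.sh@872-893) first polls /proc/partitions for the new device and then proves it answers a direct-I/O read. A rough reconstruction follows; the sleep between retries and the non-zero return on timeout are assumptions, since the trace only shows the success path, and the scratch-file path is abbreviated.

waitfornbd() {
    local nbd_name=$1 i size
    local tmp=$rootdir/test/event/nbdtest    # scratch file used for the read-back check
    # wait (up to 20 tries) for the kernel to publish the device in /proc/partitions
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # then read one 4 KiB block with O_DIRECT and confirm something actually landed in the file
    for ((i = 1; i <= 20; i++)); do
        dd if=/dev/$nbd_name of="$tmp" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ] && return 0
        sleep 0.1
    done
    return 1
}
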
21:38:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:10.961 { 00:06:10.961 "nbd_device": "/dev/nbd0", 00:06:10.961 "bdev_name": "Malloc0" 00:06:10.961 }, 00:06:10.961 { 00:06:10.961 "nbd_device": "/dev/nbd1", 00:06:10.961 "bdev_name": "Malloc1" 00:06:10.961 } 00:06:10.961 ]' 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:10.961 { 00:06:10.961 "nbd_device": "/dev/nbd0", 00:06:10.961 "bdev_name": "Malloc0" 00:06:10.961 }, 00:06:10.961 { 00:06:10.961 "nbd_device": "/dev/nbd1", 00:06:10.961 "bdev_name": "Malloc1" 00:06:10.961 } 00:06:10.961 ]' 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:10.961 /dev/nbd1' 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:10.961 /dev/nbd1' 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:10.961 256+0 records in 00:06:10.961 256+0 records out 00:06:10.961 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109114 s, 96.1 MB/s 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:10.961 256+0 records in 00:06:10.961 256+0 records out 00:06:10.961 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199617 s, 52.5 MB/s 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:10.961 256+0 records in 00:06:10.961 256+0 records out 00:06:10.961 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0194889 s, 53.8 MB/s 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.961 21:38:33 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.961 21:38:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:11.218 21:38:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:11.218 21:38:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:11.218 21:38:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:11.218 21:38:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.218 21:38:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.218 21:38:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:11.218 21:38:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.218 21:38:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.218 21:38:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.218 21:38:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:11.475 21:38:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:11.475 21:38:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:11.475 21:38:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:11.475 21:38:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.475 21:38:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.475 21:38:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:11.475 21:38:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.475 21:38:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.476 21:38:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.476 21:38:34 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.476 21:38:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.734 21:38:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:11.734 21:38:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:11.734 21:38:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.734 21:38:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:11.734 21:38:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:11.734 21:38:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.734 21:38:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:11.734 21:38:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:11.734 21:38:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:11.734 21:38:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:11.734 21:38:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:11.734 21:38:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:11.734 21:38:34 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:11.734 21:38:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:11.991 [2024-11-27 21:38:34.922749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.991 [2024-11-27 21:38:34.938536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.991 [2024-11-27 21:38:34.938538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.991 [2024-11-27 21:38:34.968201] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:11.991 [2024-11-27 21:38:34.968246] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:15.299 spdk_app_start Round 1 00:06:15.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:15.300 21:38:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:15.300 21:38:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:15.300 21:38:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70214 /var/tmp/spdk-nbd.sock 00:06:15.300 21:38:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70214 ']' 00:06:15.300 21:38:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.300 21:38:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.300 21:38:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
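
The write-then-verify pass that closed Round 0 above (nbd_common.sh@70-103) boils down to the sketch below: 1 MiB of random data (256 x 4 KiB blocks) is pushed through each exported NBD, compared back byte for byte, and the devices are then detached. The loop form and the shortened rpc.py path are assumptions; the trace shows the same commands expanded per device, with $tmp_file being the nbdrandtest scratch file.

# nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256          # 1 MiB of random data
for nbd in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
done
# nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
for nbd in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$nbd"                          # byte-for-byte compare against the source
done
rm "$tmp_file"
# nbd_stop_disks: detach both devices over the RPC socket
for nbd in "${nbd_list[@]}"; do
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$nbd"
done
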
00:06:15.300 21:38:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.300 21:38:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:15.300 21:38:38 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.300 21:38:38 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:15.300 21:38:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.300 Malloc0 00:06:15.300 21:38:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.559 Malloc1 00:06:15.559 21:38:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.559 21:38:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.559 21:38:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.559 21:38:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:15.559 21:38:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.559 21:38:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:15.559 21:38:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.559 21:38:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.559 21:38:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.559 21:38:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:15.559 21:38:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.559 21:38:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:15.559 21:38:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:15.559 21:38:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:15.559 21:38:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.559 21:38:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:15.559 /dev/nbd0 00:06:15.817 21:38:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:15.817 21:38:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:15.817 21:38:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:15.817 21:38:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:15.817 21:38:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.817 21:38:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.818 21:38:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:15.818 21:38:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:15.818 21:38:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.818 21:38:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.818 21:38:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.818 1+0 records in 00:06:15.818 1+0 records out 
00:06:15.818 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433826 s, 9.4 MB/s 00:06:15.818 21:38:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.818 21:38:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:15.818 21:38:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.818 21:38:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.818 21:38:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:15.818 21:38:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.818 21:38:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.818 21:38:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:15.818 /dev/nbd1 00:06:15.818 21:38:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:15.818 21:38:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:15.818 21:38:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:15.818 21:38:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:15.818 21:38:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.818 21:38:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.818 21:38:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:15.818 21:38:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:15.818 21:38:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.818 21:38:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.818 21:38:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.818 1+0 records in 00:06:15.818 1+0 records out 00:06:15.818 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213737 s, 19.2 MB/s 00:06:15.818 21:38:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.818 21:38:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:15.818 21:38:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.077 21:38:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:16.077 21:38:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:16.077 21:38:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.077 21:38:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.077 21:38:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.077 21:38:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.077 21:38:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.077 21:38:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:16.077 { 00:06:16.077 "nbd_device": "/dev/nbd0", 00:06:16.077 "bdev_name": "Malloc0" 00:06:16.077 }, 00:06:16.077 { 00:06:16.077 "nbd_device": "/dev/nbd1", 00:06:16.077 "bdev_name": "Malloc1" 00:06:16.077 } 
00:06:16.077 ]' 00:06:16.077 21:38:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:16.077 { 00:06:16.077 "nbd_device": "/dev/nbd0", 00:06:16.077 "bdev_name": "Malloc0" 00:06:16.077 }, 00:06:16.077 { 00:06:16.077 "nbd_device": "/dev/nbd1", 00:06:16.077 "bdev_name": "Malloc1" 00:06:16.077 } 00:06:16.077 ]' 00:06:16.077 21:38:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.077 21:38:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:16.077 /dev/nbd1' 00:06:16.077 21:38:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:16.077 /dev/nbd1' 00:06:16.077 21:38:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.077 21:38:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:16.077 21:38:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:16.077 21:38:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:16.077 21:38:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:16.077 21:38:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:16.077 21:38:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.077 21:38:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.077 21:38:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:16.077 21:38:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.077 21:38:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:16.077 21:38:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:16.077 256+0 records in 00:06:16.077 256+0 records out 00:06:16.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00721752 s, 145 MB/s 00:06:16.077 21:38:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.077 21:38:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:16.335 256+0 records in 00:06:16.335 256+0 records out 00:06:16.335 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150936 s, 69.5 MB/s 00:06:16.335 21:38:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.335 21:38:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:16.335 256+0 records in 00:06:16.335 256+0 records out 00:06:16.335 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166919 s, 62.8 MB/s 00:06:16.335 21:38:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:16.335 21:38:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.335 21:38:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.335 21:38:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:16.335 21:38:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.335 21:38:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:16.335 21:38:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:16.335 21:38:39 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.335 21:38:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:16.335 21:38:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.335 21:38:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:16.335 21:38:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.335 21:38:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:16.335 21:38:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.335 21:38:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.335 21:38:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:16.335 21:38:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:16.335 21:38:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.335 21:38:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:16.335 21:38:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:16.593 21:38:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:16.593 21:38:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:16.593 21:38:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.593 21:38:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.593 21:38:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:16.593 21:38:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:16.593 21:38:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.593 21:38:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.593 21:38:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:16.593 21:38:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:16.594 21:38:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:16.594 21:38:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:16.594 21:38:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.594 21:38:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.594 21:38:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:16.594 21:38:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:16.594 21:38:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.594 21:38:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.594 21:38:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.594 21:38:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.852 21:38:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:16.852 21:38:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:16.852 21:38:39 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:16.852 21:38:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:16.852 21:38:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.852 21:38:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:16.852 21:38:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:16.852 21:38:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:16.852 21:38:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:16.852 21:38:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:16.852 21:38:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:16.852 21:38:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:16.852 21:38:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:17.110 21:38:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:17.110 [2024-11-27 21:38:40.176034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.110 [2024-11-27 21:38:40.191885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.110 [2024-11-27 21:38:40.191888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.110 [2024-11-27 21:38:40.221222] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:17.110 [2024-11-27 21:38:40.221265] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:20.391 21:38:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:20.391 spdk_app_start Round 2 00:06:20.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:20.391 21:38:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:20.391 21:38:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70214 /var/tmp/spdk-nbd.sock 00:06:20.391 21:38:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70214 ']' 00:06:20.391 21:38:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.391 21:38:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.391 21:38:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
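
The nbd_get_count helper that alternates between 2 and 0 in the trace (nbd_common.sh@61-66) simply counts /dev/nbd entries in the JSON returned by the nbd_get_disks RPC. Roughly, with the rpc.py path shortened:

nbd_disks_json=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
# grep -c exits non-zero when there are no matches, hence the 'true' fallback seen in the trace
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count"
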
00:06:20.391 21:38:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.391 21:38:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:20.391 21:38:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.391 21:38:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:20.391 21:38:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.391 Malloc0 00:06:20.649 21:38:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.649 Malloc1 00:06:20.649 21:38:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.649 21:38:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.649 21:38:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.649 21:38:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:20.649 21:38:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.649 21:38:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:20.649 21:38:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.649 21:38:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.649 21:38:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.649 21:38:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:20.649 21:38:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.649 21:38:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:20.649 21:38:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:20.649 21:38:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:20.649 21:38:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.649 21:38:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:20.908 /dev/nbd0 00:06:20.908 21:38:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:20.908 21:38:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:20.908 21:38:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:20.908 21:38:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:20.908 21:38:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:20.908 21:38:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:20.908 21:38:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:20.908 21:38:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:20.908 21:38:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:20.908 21:38:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:20.908 21:38:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.908 1+0 records in 00:06:20.908 1+0 records out 
00:06:20.908 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000167323 s, 24.5 MB/s 00:06:20.908 21:38:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.908 21:38:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:20.908 21:38:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.908 21:38:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:20.908 21:38:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:20.908 21:38:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.908 21:38:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.908 21:38:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:21.167 /dev/nbd1 00:06:21.167 21:38:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:21.167 21:38:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:21.167 21:38:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:21.167 21:38:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:21.167 21:38:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:21.167 21:38:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:21.167 21:38:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:21.167 21:38:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:21.167 21:38:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:21.167 21:38:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:21.167 21:38:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.167 1+0 records in 00:06:21.167 1+0 records out 00:06:21.167 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204154 s, 20.1 MB/s 00:06:21.167 21:38:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.167 21:38:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:21.167 21:38:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:21.167 21:38:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:21.167 21:38:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:21.167 21:38:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.167 21:38:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.167 21:38:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.167 21:38:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.167 21:38:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.426 21:38:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:21.426 { 00:06:21.426 "nbd_device": "/dev/nbd0", 00:06:21.426 "bdev_name": "Malloc0" 00:06:21.426 }, 00:06:21.426 { 00:06:21.426 "nbd_device": "/dev/nbd1", 00:06:21.426 "bdev_name": "Malloc1" 00:06:21.426 } 
00:06:21.426 ]' 00:06:21.426 21:38:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:21.426 { 00:06:21.426 "nbd_device": "/dev/nbd0", 00:06:21.426 "bdev_name": "Malloc0" 00:06:21.426 }, 00:06:21.426 { 00:06:21.426 "nbd_device": "/dev/nbd1", 00:06:21.426 "bdev_name": "Malloc1" 00:06:21.426 } 00:06:21.426 ]' 00:06:21.426 21:38:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.426 21:38:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:21.426 /dev/nbd1' 00:06:21.426 21:38:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.426 21:38:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:21.426 /dev/nbd1' 00:06:21.426 21:38:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:21.426 21:38:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:21.426 21:38:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:21.426 21:38:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:21.426 21:38:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:21.426 21:38:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.426 21:38:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.426 21:38:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:21.426 21:38:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.426 21:38:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:21.426 21:38:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:21.426 256+0 records in 00:06:21.426 256+0 records out 00:06:21.426 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100248 s, 105 MB/s 00:06:21.426 21:38:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.426 21:38:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:21.426 256+0 records in 00:06:21.426 256+0 records out 00:06:21.426 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014975 s, 70.0 MB/s 00:06:21.426 21:38:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.426 21:38:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:21.426 256+0 records in 00:06:21.426 256+0 records out 00:06:21.426 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161595 s, 64.9 MB/s 00:06:21.426 21:38:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:21.426 21:38:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.426 21:38:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.426 21:38:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:21.426 21:38:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.426 21:38:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:21.426 21:38:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:21.427 21:38:44 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:21.427 21:38:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:21.427 21:38:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.427 21:38:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:21.427 21:38:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:21.427 21:38:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:21.427 21:38:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.427 21:38:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.427 21:38:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:21.427 21:38:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:21.427 21:38:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.427 21:38:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:21.685 21:38:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:21.685 21:38:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:21.685 21:38:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:21.685 21:38:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.685 21:38:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.685 21:38:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:21.685 21:38:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.685 21:38:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.685 21:38:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.685 21:38:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:21.943 21:38:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:21.943 21:38:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:21.943 21:38:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:21.943 21:38:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.943 21:38:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.943 21:38:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:21.943 21:38:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.943 21:38:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.943 21:38:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.943 21:38:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.943 21:38:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.201 21:38:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:22.201 21:38:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:22.201 21:38:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:22.201 21:38:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:22.201 21:38:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:22.201 21:38:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.201 21:38:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:22.201 21:38:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:22.201 21:38:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:22.201 21:38:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:22.201 21:38:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:22.201 21:38:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:22.201 21:38:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:22.460 21:38:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:22.460 [2024-11-27 21:38:45.438154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.460 [2024-11-27 21:38:45.453538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.460 [2024-11-27 21:38:45.453546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.460 [2024-11-27 21:38:45.482870] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:22.460 [2024-11-27 21:38:45.482913] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:25.811 21:38:48 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70214 /var/tmp/spdk-nbd.sock 00:06:25.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:25.811 21:38:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70214 ']' 00:06:25.811 21:38:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:25.811 21:38:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.811 21:38:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:25.811 21:38:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.811 21:38:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.811 21:38:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.811 21:38:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:25.811 21:38:48 event.app_repeat -- event/event.sh@39 -- # killprocess 70214 00:06:25.811 21:38:48 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 70214 ']' 00:06:25.811 21:38:48 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 70214 00:06:25.811 21:38:48 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:25.811 21:38:48 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.811 21:38:48 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70214 00:06:25.811 killing process with pid 70214 00:06:25.811 21:38:48 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.811 21:38:48 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.811 21:38:48 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70214' 00:06:25.811 21:38:48 event.app_repeat -- common/autotest_common.sh@973 -- # kill 70214 00:06:25.811 21:38:48 event.app_repeat -- common/autotest_common.sh@978 -- # wait 70214 00:06:25.811 spdk_app_start is called in Round 0. 00:06:25.811 Shutdown signal received, stop current app iteration 00:06:25.811 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 reinitialization... 00:06:25.811 spdk_app_start is called in Round 1. 00:06:25.811 Shutdown signal received, stop current app iteration 00:06:25.811 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 reinitialization... 00:06:25.811 spdk_app_start is called in Round 2. 00:06:25.811 Shutdown signal received, stop current app iteration 00:06:25.811 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 reinitialization... 00:06:25.811 spdk_app_start is called in Round 3. 00:06:25.811 Shutdown signal received, stop current app iteration 00:06:25.811 21:38:48 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:25.811 21:38:48 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:25.811 00:06:25.811 real 0m16.917s 00:06:25.811 user 0m37.973s 00:06:25.811 sys 0m2.013s 00:06:25.811 21:38:48 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.811 21:38:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.811 ************************************ 00:06:25.811 END TEST app_repeat 00:06:25.811 ************************************ 00:06:25.811 21:38:48 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:25.811 21:38:48 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:25.811 21:38:48 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.811 21:38:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.811 21:38:48 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.811 ************************************ 00:06:25.811 START TEST cpu_locks 00:06:25.811 ************************************ 00:06:25.811 21:38:48 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:25.811 * Looking for test storage... 
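
Both the scheduler test earlier and app_repeat here tear their target process down through the same killprocess helper (common/autotest_common.sh@954-978). A sketch of the success path shown in the trace, with the early-exit and sudo-guard branches filled in as assumptions:

killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 0                            # already gone, nothing to do
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 / reactor_2 in this run
    fi
    [ "$process_name" = sudo ] && return 1                # never kill a sudo wrapper directly
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}
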
00:06:25.811 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:25.811 21:38:48 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:25.811 21:38:48 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:25.811 21:38:48 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:25.811 21:38:48 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:25.811 21:38:48 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.811 21:38:48 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.811 21:38:48 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.811 21:38:48 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.811 21:38:48 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.811 21:38:48 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.811 21:38:48 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.811 21:38:48 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.811 21:38:48 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.811 21:38:48 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.811 21:38:48 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.811 21:38:48 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:25.811 21:38:48 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:25.811 21:38:48 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.811 21:38:48 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:25.811 21:38:48 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:25.811 21:38:48 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:25.811 21:38:48 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.811 21:38:48 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:25.811 21:38:48 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.811 21:38:48 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:25.811 21:38:48 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:25.811 21:38:48 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.811 21:38:48 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:25.811 21:38:48 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.811 21:38:48 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.811 21:38:48 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.811 21:38:48 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:25.811 21:38:48 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.811 21:38:48 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:25.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.811 --rc genhtml_branch_coverage=1 00:06:25.811 --rc genhtml_function_coverage=1 00:06:25.811 --rc genhtml_legend=1 00:06:25.811 --rc geninfo_all_blocks=1 00:06:25.811 --rc geninfo_unexecuted_blocks=1 00:06:25.811 00:06:25.811 ' 00:06:25.811 21:38:48 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:25.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.811 --rc genhtml_branch_coverage=1 00:06:25.811 --rc genhtml_function_coverage=1 
00:06:25.811 --rc genhtml_legend=1 00:06:25.811 --rc geninfo_all_blocks=1 00:06:25.811 --rc geninfo_unexecuted_blocks=1 00:06:25.811 00:06:25.811 ' 00:06:25.811 21:38:48 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:25.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.811 --rc genhtml_branch_coverage=1 00:06:25.812 --rc genhtml_function_coverage=1 00:06:25.812 --rc genhtml_legend=1 00:06:25.812 --rc geninfo_all_blocks=1 00:06:25.812 --rc geninfo_unexecuted_blocks=1 00:06:25.812 00:06:25.812 ' 00:06:25.812 21:38:48 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:25.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.812 --rc genhtml_branch_coverage=1 00:06:25.812 --rc genhtml_function_coverage=1 00:06:25.812 --rc genhtml_legend=1 00:06:25.812 --rc geninfo_all_blocks=1 00:06:25.812 --rc geninfo_unexecuted_blocks=1 00:06:25.812 00:06:25.812 ' 00:06:25.812 21:38:48 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:25.812 21:38:48 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:25.812 21:38:48 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:25.812 21:38:48 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:25.812 21:38:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.812 21:38:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.812 21:38:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.812 ************************************ 00:06:25.812 START TEST default_locks 00:06:25.812 ************************************ 00:06:25.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.812 21:38:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:25.812 21:38:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70633 00:06:25.812 21:38:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70633 00:06:25.812 21:38:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:25.812 21:38:48 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70633 ']' 00:06:25.812 21:38:48 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.812 21:38:48 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.812 21:38:48 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.812 21:38:48 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.812 21:38:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.070 [2024-11-27 21:38:48.987997] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
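The lcov probe at the start of cpu_locks (scripts/common.sh, traced above) compares dotted version strings entirely in bash: both strings are split on '.', '-' and ':' into arrays and the fields are compared numerically from the left. A condensed, self-contained sketch of that comparison (the real helper also validates each field with a regex):

    version_lt() {                         # exit 0 if $1 < $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                           # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "lcov 1.15 is older than 2"   # matches the check traced above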
00:06:26.070 [2024-11-27 21:38:48.988152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70633 ] 00:06:26.070 [2024-11-27 21:38:49.130575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.070 [2024-11-27 21:38:49.154533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.006 21:38:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.006 21:38:49 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:27.006 21:38:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70633 00:06:27.006 21:38:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.006 21:38:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70633 00:06:27.006 21:38:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70633 00:06:27.006 21:38:49 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 70633 ']' 00:06:27.006 21:38:49 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 70633 00:06:27.006 21:38:49 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:27.006 21:38:49 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.006 21:38:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70633 00:06:27.006 killing process with pid 70633 00:06:27.006 21:38:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.006 21:38:49 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.006 21:38:49 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70633' 00:06:27.006 21:38:49 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 70633 00:06:27.006 21:38:49 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 70633 00:06:27.268 21:38:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70633 00:06:27.268 21:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:27.268 21:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70633 00:06:27.268 21:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:27.268 21:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.268 21:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:27.268 21:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.268 21:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 70633 00:06:27.268 21:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70633 ']' 00:06:27.268 21:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.268 21:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.268 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.268 21:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.268 21:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.268 21:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.268 ERROR: process (pid: 70633) is no longer running 00:06:27.268 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70633) - No such process 00:06:27.268 21:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.268 21:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:27.268 21:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:27.268 21:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.268 21:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:27.268 21:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.268 21:38:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:27.268 21:38:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:27.268 21:38:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:27.268 21:38:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:27.268 00:06:27.268 real 0m1.305s 00:06:27.268 user 0m1.343s 00:06:27.268 sys 0m0.400s 00:06:27.268 21:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.268 ************************************ 00:06:27.268 END TEST default_locks 00:06:27.268 ************************************ 00:06:27.268 21:38:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.268 21:38:50 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:27.268 21:38:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.268 21:38:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.268 21:38:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.268 ************************************ 00:06:27.268 START TEST default_locks_via_rpc 00:06:27.268 ************************************ 00:06:27.268 21:38:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:27.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
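The 'NOT waitforlisten 70633' sequence above is the suite's negative assertion: the wrapped command is expected to fail because pid 70633 was already killed, and the helper succeeds only when it really does fail. Stripped of the valid_exec_arg and signal handling visible in the trace, the core of that helper is just an exit-status inversion:

    NOT() {                    # succeed only if the wrapped command fails
        local es=0
        "$@" || es=$?
        (( es != 0 ))          # exit 0 when the command failed, 1 when it unexpectedly passed
    }
    # usage, mirroring the trace: NOT waitforlisten 70633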
00:06:27.269 21:38:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70681 00:06:27.269 21:38:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70681 00:06:27.269 21:38:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.269 21:38:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 70681 ']' 00:06:27.269 21:38:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.269 21:38:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.269 21:38:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.269 21:38:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.269 21:38:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.269 [2024-11-27 21:38:50.339836] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:27.269 [2024-11-27 21:38:50.340101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70681 ] 00:06:27.529 [2024-11-27 21:38:50.483227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.529 [2024-11-27 21:38:50.506230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.096 21:38:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.096 21:38:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:28.096 21:38:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:28.096 21:38:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.096 21:38:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.096 21:38:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.096 21:38:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:28.096 21:38:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:28.096 21:38:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:28.096 21:38:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:28.096 21:38:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:28.096 21:38:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.096 21:38:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.096 21:38:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.096 21:38:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70681 00:06:28.096 21:38:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
00:06:28.096 21:38:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70681 00:06:28.355 21:38:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70681 00:06:28.355 21:38:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 70681 ']' 00:06:28.355 21:38:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 70681 00:06:28.355 21:38:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:28.355 21:38:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.355 21:38:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70681 00:06:28.615 killing process with pid 70681 00:06:28.615 21:38:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.615 21:38:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.615 21:38:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70681' 00:06:28.615 21:38:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 70681 00:06:28.615 21:38:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 70681 00:06:28.615 00:06:28.615 real 0m1.441s 00:06:28.615 user 0m1.501s 00:06:28.615 sys 0m0.415s 00:06:28.615 ************************************ 00:06:28.615 END TEST default_locks_via_rpc 00:06:28.615 ************************************ 00:06:28.615 21:38:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.615 21:38:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.875 21:38:51 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:28.875 21:38:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.875 21:38:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.875 21:38:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.875 ************************************ 00:06:28.875 START TEST non_locking_app_on_locked_coremask 00:06:28.875 ************************************ 00:06:28.875 21:38:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:28.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
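default_locks_via_rpc, which finished just above, toggles the core locks at runtime instead of at startup: framework_disable_cpumask_locks releases them, framework_enable_cpumask_locks claims them again, and lslocks on the target's PID confirms the spdk_cpu_lock files are really held. rpc_cmd in the trace is the suite's wrapper around scripts/rpc.py; assuming the methods are exposed as rpc.py subcommands in the same way, the direct calls look roughly like this (PID and paths taken from the trace):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    PID=70681                                   # spdk_tgt under test in the trace above

    "$RPC" framework_disable_cpumask_locks      # drop the per-core file locks
    "$RPC" framework_enable_cpumask_locks       # take them back
    # the target should now hold locks on the /var/tmp/spdk_cpu_lock_* files again
    lslocks -p "$PID" | grep -q spdk_cpu_lock && echo "core locks held"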
00:06:28.875 21:38:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70722 00:06:28.875 21:38:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70722 /var/tmp/spdk.sock 00:06:28.875 21:38:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70722 ']' 00:06:28.875 21:38:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.875 21:38:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.875 21:38:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.875 21:38:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.875 21:38:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.875 21:38:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.875 [2024-11-27 21:38:51.833444] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:28.875 [2024-11-27 21:38:51.834019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70722 ] 00:06:28.875 [2024-11-27 21:38:51.979984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.135 [2024-11-27 21:38:51.999724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.706 21:38:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.706 21:38:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:29.706 21:38:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:29.706 21:38:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70738 00:06:29.706 21:38:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70738 /var/tmp/spdk2.sock 00:06:29.706 21:38:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70738 ']' 00:06:29.706 21:38:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.706 21:38:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.706 21:38:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
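The second target above is started on the same core mask (0x1) as the one already running, which would normally be refused; it comes up because it is launched with --disable-cpumask-locks and is given its own RPC socket. In sketch form, with the binary and socket paths from the trace (illustrative only, not a complete test):

    BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$BIN" -m 0x1 &                                                  # first target claims core 0
    "$BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # second shares core 0 without locking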
00:06:29.706 21:38:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.706 21:38:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.706 [2024-11-27 21:38:52.752866] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:29.706 [2024-11-27 21:38:52.753754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70738 ] 00:06:29.968 [2024-11-27 21:38:52.915843] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:29.968 [2024-11-27 21:38:52.915899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.968 [2024-11-27 21:38:52.976748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.541 21:38:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.541 21:38:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:30.541 21:38:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70722 00:06:30.541 21:38:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70722 00:06:30.541 21:38:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.802 21:38:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70722 00:06:30.802 21:38:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70722 ']' 00:06:30.802 21:38:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70722 00:06:30.802 21:38:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:31.064 21:38:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.064 21:38:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70722 00:06:31.064 killing process with pid 70722 00:06:31.064 21:38:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:31.064 21:38:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.064 21:38:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70722' 00:06:31.064 21:38:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70722 00:06:31.064 21:38:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70722 00:06:31.640 21:38:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70738 00:06:31.640 21:38:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70738 ']' 00:06:31.640 21:38:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70738 00:06:31.640 21:38:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 
-- # uname 00:06:31.640 21:38:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.640 21:38:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70738 00:06:31.640 killing process with pid 70738 00:06:31.640 21:38:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:31.640 21:38:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.640 21:38:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70738' 00:06:31.640 21:38:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70738 00:06:31.640 21:38:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70738 00:06:32.216 ************************************ 00:06:32.216 END TEST non_locking_app_on_locked_coremask 00:06:32.216 ************************************ 00:06:32.216 00:06:32.216 real 0m3.259s 00:06:32.216 user 0m3.498s 00:06:32.216 sys 0m0.872s 00:06:32.216 21:38:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.216 21:38:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.216 21:38:55 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:32.216 21:38:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.216 21:38:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.216 21:38:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.216 ************************************ 00:06:32.216 START TEST locking_app_on_unlocked_coremask 00:06:32.216 ************************************ 00:06:32.216 21:38:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:32.216 21:38:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=70801 00:06:32.216 21:38:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 70801 /var/tmp/spdk.sock 00:06:32.216 21:38:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70801 ']' 00:06:32.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.216 21:38:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.216 21:38:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.216 21:38:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:32.216 21:38:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:32.216 21:38:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.216 21:38:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.216 [2024-11-27 21:38:55.155565] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:32.216 [2024-11-27 21:38:55.155682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70801 ] 00:06:32.216 [2024-11-27 21:38:55.300714] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:32.216 [2024-11-27 21:38:55.300867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.216 [2024-11-27 21:38:55.319406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.161 21:38:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.161 21:38:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:33.161 21:38:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=70812 00:06:33.161 21:38:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 70812 /var/tmp/spdk2.sock 00:06:33.161 21:38:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70812 ']' 00:06:33.161 21:38:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.161 21:38:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:33.161 21:38:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.161 21:38:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.161 21:38:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.161 21:38:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.161 [2024-11-27 21:38:56.060912] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
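locking_app_on_unlocked_coremask, starting above, inverts the previous case: this time the first target is launched with --disable-cpumask-locks (hence the 'CPU core locks deactivated' notice in its startup banner), so the core lock stays free and the second, normally started target can claim it. A condensed sketch of that arrangement (paths from the trace):

    BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$BIN" -m 0x1 --disable-cpumask-locks &            # first target leaves the core lock untaken
    "$BIN" -m 0x1 -r /var/tmp/spdk2.sock &             # second target claims core 0 normally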
00:06:33.161 [2024-11-27 21:38:56.061170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70812 ] 00:06:33.161 [2024-11-27 21:38:56.221238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.161 [2024-11-27 21:38:56.259974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.106 21:38:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.106 21:38:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:34.106 21:38:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 70812 00:06:34.106 21:38:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70812 00:06:34.106 21:38:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.106 21:38:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 70801 00:06:34.106 21:38:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70801 ']' 00:06:34.106 21:38:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 70801 00:06:34.106 21:38:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:34.106 21:38:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.106 21:38:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70801 00:06:34.106 21:38:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.106 killing process with pid 70801 00:06:34.106 21:38:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.106 21:38:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70801' 00:06:34.106 21:38:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 70801 00:06:34.106 21:38:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 70801 00:06:34.673 21:38:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 70812 00:06:34.673 21:38:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70812 ']' 00:06:34.673 21:38:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 70812 00:06:34.673 21:38:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:34.674 21:38:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.674 21:38:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70812 00:06:34.674 21:38:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.674 21:38:57 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.674 killing process with pid 70812 00:06:34.674 21:38:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70812' 00:06:34.674 21:38:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 70812 00:06:34.674 21:38:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 70812 00:06:34.934 00:06:34.934 real 0m2.780s 00:06:34.934 user 0m3.111s 00:06:34.934 sys 0m0.723s 00:06:34.934 ************************************ 00:06:34.934 END TEST locking_app_on_unlocked_coremask 00:06:34.934 ************************************ 00:06:34.934 21:38:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.934 21:38:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.934 21:38:57 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:34.934 21:38:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.934 21:38:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.934 21:38:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.934 ************************************ 00:06:34.934 START TEST locking_app_on_locked_coremask 00:06:34.934 ************************************ 00:06:34.934 21:38:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:34.934 21:38:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=70870 00:06:34.934 21:38:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 70870 /var/tmp/spdk.sock 00:06:34.934 21:38:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70870 ']' 00:06:34.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.934 21:38:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.934 21:38:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.934 21:38:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.934 21:38:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.934 21:38:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.934 21:38:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:34.934 [2024-11-27 21:38:57.992471] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:34.934 [2024-11-27 21:38:57.992591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70870 ] 00:06:35.195 [2024-11-27 21:38:58.134804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.195 [2024-11-27 21:38:58.155877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.853 21:38:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.853 21:38:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:35.853 21:38:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=70886 00:06:35.853 21:38:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 70886 /var/tmp/spdk2.sock 00:06:35.853 21:38:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:35.853 21:38:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70886 /var/tmp/spdk2.sock 00:06:35.853 21:38:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:35.853 21:38:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:35.853 21:38:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.853 21:38:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:35.853 21:38:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.853 21:38:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 70886 /var/tmp/spdk2.sock 00:06:35.853 21:38:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70886 ']' 00:06:35.853 21:38:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:35.853 21:38:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.853 21:38:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:35.853 21:38:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.853 21:38:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.853 [2024-11-27 21:38:58.892357] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:35.853 [2024-11-27 21:38:58.892481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70886 ] 00:06:36.114 [2024-11-27 21:38:59.050320] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 70870 has claimed it. 00:06:36.114 [2024-11-27 21:38:59.050386] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:36.684 ERROR: process (pid: 70886) is no longer running 00:06:36.684 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70886) - No such process 00:06:36.684 21:38:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.684 21:38:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:36.684 21:38:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:36.684 21:38:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:36.684 21:38:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:36.684 21:38:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:36.684 21:38:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 70870 00:06:36.684 21:38:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70870 00:06:36.684 21:38:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.684 21:38:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 70870 00:06:36.684 21:38:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70870 ']' 00:06:36.684 21:38:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70870 00:06:36.684 21:38:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:36.685 21:38:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.685 21:38:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70870 00:06:36.685 21:38:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.685 killing process with pid 70870 00:06:36.685 21:38:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.685 21:38:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70870' 00:06:36.685 21:38:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70870 00:06:36.685 21:38:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70870 00:06:36.945 00:06:36.945 real 0m2.035s 00:06:36.945 user 0m2.273s 00:06:36.945 sys 0m0.485s 00:06:36.945 ************************************ 00:06:36.945 END TEST locking_app_on_locked_coremask 00:06:36.945 ************************************ 00:06:36.945 21:38:59 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.945 21:38:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.945 21:39:00 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:36.945 21:39:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.945 21:39:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.945 21:39:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.945 ************************************ 00:06:36.945 START TEST locking_overlapped_coremask 00:06:36.945 ************************************ 00:06:36.945 21:39:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:36.945 21:39:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=70928 00:06:36.945 21:39:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 70928 /var/tmp/spdk.sock 00:06:36.945 21:39:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 70928 ']' 00:06:36.945 21:39:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.945 21:39:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:36.945 21:39:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.945 21:39:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.945 21:39:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.945 21:39:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.206 [2024-11-27 21:39:00.084456] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
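locking_app_on_locked_coremask, wrapped up above, is the failure path: with the first target holding the lock, a second spdk_tgt on the same mask aborts with 'Cannot create lock on core 0, probably process 70870 has claimed it', never starts listening, and the later cleanup therefore reports 'No such process' for its PID. A minimal reproduction of that behaviour (paths from the trace; the second command is expected to exit non-zero):

    BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$BIN" -m 0x1 &                        # holds /var/tmp/spdk_cpu_lock_000
    sleep 1
    "$BIN" -m 0x1 -r /var/tmp/spdk2.sock   # expected: "Unable to acquire lock on assigned core mask - exiting."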
00:06:37.206 [2024-11-27 21:39:00.084586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70928 ] 00:06:37.206 [2024-11-27 21:39:00.231424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.206 [2024-11-27 21:39:00.252829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.206 [2024-11-27 21:39:00.253115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.206 [2024-11-27 21:39:00.253152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.150 21:39:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.151 21:39:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:38.151 21:39:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=70946 00:06:38.151 21:39:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 70946 /var/tmp/spdk2.sock 00:06:38.151 21:39:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:38.151 21:39:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70946 /var/tmp/spdk2.sock 00:06:38.151 21:39:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:38.151 21:39:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:38.151 21:39:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.151 21:39:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:38.151 21:39:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.151 21:39:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 70946 /var/tmp/spdk2.sock 00:06:38.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.151 21:39:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 70946 ']' 00:06:38.151 21:39:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.151 21:39:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.151 21:39:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.151 21:39:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.151 21:39:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.151 [2024-11-27 21:39:00.986179] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:38.151 [2024-11-27 21:39:00.986624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70946 ] 00:06:38.151 [2024-11-27 21:39:01.144998] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70928 has claimed it. 00:06:38.151 [2024-11-27 21:39:01.145053] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:38.722 ERROR: process (pid: 70946) is no longer running 00:06:38.722 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70946) - No such process 00:06:38.722 21:39:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.722 21:39:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:38.722 21:39:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:38.722 21:39:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:38.722 21:39:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:38.722 21:39:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:38.722 21:39:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:38.722 21:39:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:38.722 21:39:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:38.722 21:39:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:38.722 21:39:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 70928 00:06:38.722 21:39:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 70928 ']' 00:06:38.722 21:39:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 70928 00:06:38.722 21:39:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:38.722 21:39:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.722 21:39:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70928 00:06:38.722 21:39:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.722 killing process with pid 70928 00:06:38.722 21:39:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.722 21:39:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70928' 00:06:38.722 21:39:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 70928 00:06:38.722 21:39:01 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 70928 00:06:38.983 00:06:38.983 real 0m1.890s 00:06:38.983 user 0m5.241s 00:06:38.983 sys 0m0.386s 00:06:38.983 ************************************ 00:06:38.983 END TEST locking_overlapped_coremask 00:06:38.983 ************************************ 00:06:38.983 21:39:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.983 21:39:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.983 21:39:01 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:38.983 21:39:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.983 21:39:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.983 21:39:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.983 ************************************ 00:06:38.983 START TEST locking_overlapped_coremask_via_rpc 00:06:38.983 ************************************ 00:06:38.983 21:39:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:38.983 21:39:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=70988 00:06:38.983 21:39:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 70988 /var/tmp/spdk.sock 00:06:38.983 21:39:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 70988 ']' 00:06:38.983 21:39:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.983 21:39:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.983 21:39:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.983 21:39:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.983 21:39:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:38.983 21:39:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.983 [2024-11-27 21:39:02.037619] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:38.983 [2024-11-27 21:39:02.037731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70988 ] 00:06:39.243 [2024-11-27 21:39:02.184383] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
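locking_overlapped_coremask, which ended above, uses two masks that overlap on exactly one core: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the second target fails on core 2. check_remaining_locks then asserts that only the surviving target's lock files are left; the locks are ordinary files named one per claimed core. A sketch of that check, mirroring the glob and brace expansion in the trace:

    # the surviving target was started with -m 0x7, i.e. cores 0, 1 and 2
    expected=(/var/tmp/spdk_cpu_lock_{000..002})
    actual=(/var/tmp/spdk_cpu_lock_*)
    [[ ${actual[*]} == "${expected[*]}" ]] && echo "only cores 0-2 are locked"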
00:06:39.243 [2024-11-27 21:39:02.184421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.243 [2024-11-27 21:39:02.205763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.243 [2024-11-27 21:39:02.205970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.243 [2024-11-27 21:39:02.206038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.816 21:39:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.816 21:39:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:39.816 21:39:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71006 00:06:39.816 21:39:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71006 /var/tmp/spdk2.sock 00:06:39.816 21:39:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71006 ']' 00:06:39.816 21:39:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.816 21:39:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.816 21:39:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.816 21:39:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:39.816 21:39:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.816 21:39:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.816 [2024-11-27 21:39:02.935145] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:39.817 [2024-11-27 21:39:02.935261] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71006 ] 00:06:40.077 [2024-11-27 21:39:03.095224] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
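The two targets in this test are deliberately given overlapping masks: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so they share exactly core 2. Both start cleanly because --disable-cpumask-locks skips the lock files; the conflict only surfaces once locking is re-enabled over RPC below. The overlap can be confirmed with plain shell arithmetic:

  # 0x7 = cores 0,1,2 and 0x1c = cores 2,3,4 -> the masks intersect on bit 2
  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))    # prints 0x4, i.e. core 2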
00:06:40.077 [2024-11-27 21:39:03.095271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.077 [2024-11-27 21:39:03.136160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.077 [2024-11-27 21:39:03.139517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.078 [2024-11-27 21:39:03.139620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:41.016 21:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.016 21:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:41.016 21:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:41.016 21:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.016 21:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.016 21:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.016 21:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:41.016 21:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:41.016 21:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:41.016 21:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:41.016 21:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.016 21:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:41.016 21:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.016 21:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:41.016 21:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.016 21:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.016 [2024-11-27 21:39:03.794476] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70988 has claimed it. 00:06:41.016 request: 00:06:41.016 { 00:06:41.016 "method": "framework_enable_cpumask_locks", 00:06:41.016 "req_id": 1 00:06:41.016 } 00:06:41.016 Got JSON-RPC error response 00:06:41.016 response: 00:06:41.016 { 00:06:41.016 "code": -32603, 00:06:41.016 "message": "Failed to claim CPU core: 2" 00:06:41.016 } 00:06:41.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
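This JSON-RPC exchange is the expected failure: with the first target (pid 70988) holding core 2, asking the second target to re-enable its cpumask locks returns -32603 / 'Failed to claim CPU core: 2'. Outside the harness the same two calls can be issued with the bundled rpc.py client; the second socket path matches the -r option the second target was started with:

  # first target, default /var/tmp/spdk.sock: claims its cores successfully
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
  # second target: expected to fail while the first one still holds core 2
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks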
00:06:41.016 21:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:41.016 21:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:41.016 21:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:41.016 21:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:41.016 21:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:41.016 21:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 70988 /var/tmp/spdk.sock 00:06:41.016 21:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 70988 ']' 00:06:41.016 21:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.016 21:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.016 21:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.016 21:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.016 21:39:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.016 21:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.016 21:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:41.016 21:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71006 /var/tmp/spdk2.sock 00:06:41.016 21:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71006 ']' 00:06:41.016 21:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.016 21:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.016 21:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:41.016 21:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.016 21:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.275 21:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.275 ************************************ 00:06:41.275 END TEST locking_overlapped_coremask_via_rpc 00:06:41.275 ************************************ 00:06:41.275 21:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:41.275 21:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:41.275 21:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:41.275 21:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:41.275 21:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:41.275 00:06:41.275 real 0m2.252s 00:06:41.275 user 0m1.054s 00:06:41.275 sys 0m0.137s 00:06:41.275 21:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.275 21:39:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.275 21:39:04 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:41.275 21:39:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 70988 ]] 00:06:41.275 21:39:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 70988 00:06:41.275 21:39:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 70988 ']' 00:06:41.275 21:39:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 70988 00:06:41.275 21:39:04 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:41.275 21:39:04 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.275 21:39:04 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70988 00:06:41.275 killing process with pid 70988 00:06:41.275 21:39:04 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:41.275 21:39:04 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:41.275 21:39:04 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70988' 00:06:41.275 21:39:04 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 70988 00:06:41.275 21:39:04 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 70988 00:06:41.534 21:39:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71006 ]] 00:06:41.534 21:39:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71006 00:06:41.534 21:39:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71006 ']' 00:06:41.534 21:39:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71006 00:06:41.534 21:39:04 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:41.534 21:39:04 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.534 
21:39:04 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71006 00:06:41.534 killing process with pid 71006 00:06:41.534 21:39:04 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:41.534 21:39:04 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:41.534 21:39:04 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71006' 00:06:41.534 21:39:04 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71006 00:06:41.534 21:39:04 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71006 00:06:41.792 21:39:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:41.792 Process with pid 70988 is not found 00:06:41.792 21:39:04 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:41.792 21:39:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 70988 ]] 00:06:41.792 21:39:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 70988 00:06:41.792 21:39:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 70988 ']' 00:06:41.792 21:39:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 70988 00:06:41.792 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (70988) - No such process 00:06:41.792 21:39:04 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 70988 is not found' 00:06:41.793 21:39:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71006 ]] 00:06:41.793 21:39:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71006 00:06:41.793 21:39:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71006 ']' 00:06:41.793 21:39:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71006 00:06:41.793 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71006) - No such process 00:06:41.793 Process with pid 71006 is not found 00:06:41.793 21:39:04 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71006 is not found' 00:06:41.793 21:39:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:41.793 00:06:41.793 real 0m15.989s 00:06:41.793 user 0m28.067s 00:06:41.793 sys 0m4.148s 00:06:41.793 21:39:04 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.793 ************************************ 00:06:41.793 END TEST cpu_locks 00:06:41.793 ************************************ 00:06:41.793 21:39:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.793 ************************************ 00:06:41.793 END TEST event 00:06:41.793 ************************************ 00:06:41.793 00:06:41.793 real 0m39.443s 00:06:41.793 user 1m17.119s 00:06:41.793 sys 0m6.881s 00:06:41.793 21:39:04 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.793 21:39:04 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.793 21:39:04 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:41.793 21:39:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.793 21:39:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.793 21:39:04 -- common/autotest_common.sh@10 -- # set +x 00:06:41.793 ************************************ 00:06:41.793 START TEST thread 00:06:41.793 ************************************ 00:06:41.793 21:39:04 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:41.793 * Looking for test storage... 
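The teardown trace above shows the defensive pattern the common helpers use: probe the pid with kill -0, only signal it if it still exists, and treat the 'No such process' case as success so that targets which already exited do not fail the cleanup. Condensed into a stand-alone sketch (the pid value here is only a placeholder; the test takes it from its own spdk_tgt_pid variables):

  pid=70988                      # placeholder pid
  if kill -0 "$pid" 2>/dev/null; then
      kill "$pid"
      wait "$pid" 2>/dev/null || true
  else
      echo "Process with pid $pid is not found"
  fi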
00:06:41.793 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:41.793 21:39:04 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:41.793 21:39:04 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:41.793 21:39:04 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:42.052 21:39:04 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:42.052 21:39:04 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.052 21:39:04 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.052 21:39:04 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.052 21:39:04 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.052 21:39:04 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.052 21:39:04 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.052 21:39:04 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.052 21:39:04 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.052 21:39:04 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.052 21:39:04 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.052 21:39:04 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.052 21:39:04 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:42.052 21:39:04 thread -- scripts/common.sh@345 -- # : 1 00:06:42.052 21:39:04 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.052 21:39:04 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:42.052 21:39:04 thread -- scripts/common.sh@365 -- # decimal 1 00:06:42.052 21:39:04 thread -- scripts/common.sh@353 -- # local d=1 00:06:42.052 21:39:04 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.052 21:39:04 thread -- scripts/common.sh@355 -- # echo 1 00:06:42.052 21:39:04 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.052 21:39:04 thread -- scripts/common.sh@366 -- # decimal 2 00:06:42.052 21:39:04 thread -- scripts/common.sh@353 -- # local d=2 00:06:42.052 21:39:04 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.052 21:39:04 thread -- scripts/common.sh@355 -- # echo 2 00:06:42.052 21:39:04 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.052 21:39:04 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.052 21:39:04 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.052 21:39:04 thread -- scripts/common.sh@368 -- # return 0 00:06:42.052 21:39:04 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.052 21:39:04 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:42.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.053 --rc genhtml_branch_coverage=1 00:06:42.053 --rc genhtml_function_coverage=1 00:06:42.053 --rc genhtml_legend=1 00:06:42.053 --rc geninfo_all_blocks=1 00:06:42.053 --rc geninfo_unexecuted_blocks=1 00:06:42.053 00:06:42.053 ' 00:06:42.053 21:39:04 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:42.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.053 --rc genhtml_branch_coverage=1 00:06:42.053 --rc genhtml_function_coverage=1 00:06:42.053 --rc genhtml_legend=1 00:06:42.053 --rc geninfo_all_blocks=1 00:06:42.053 --rc geninfo_unexecuted_blocks=1 00:06:42.053 00:06:42.053 ' 00:06:42.053 21:39:04 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:42.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:42.053 --rc genhtml_branch_coverage=1 00:06:42.053 --rc genhtml_function_coverage=1 00:06:42.053 --rc genhtml_legend=1 00:06:42.053 --rc geninfo_all_blocks=1 00:06:42.053 --rc geninfo_unexecuted_blocks=1 00:06:42.053 00:06:42.053 ' 00:06:42.053 21:39:04 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:42.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.053 --rc genhtml_branch_coverage=1 00:06:42.053 --rc genhtml_function_coverage=1 00:06:42.053 --rc genhtml_legend=1 00:06:42.053 --rc geninfo_all_blocks=1 00:06:42.053 --rc geninfo_unexecuted_blocks=1 00:06:42.053 00:06:42.053 ' 00:06:42.053 21:39:04 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:42.053 21:39:04 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:42.053 21:39:04 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.053 21:39:04 thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.053 ************************************ 00:06:42.053 START TEST thread_poller_perf 00:06:42.053 ************************************ 00:06:42.053 21:39:04 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:42.053 [2024-11-27 21:39:04.993881] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:42.053 [2024-11-27 21:39:04.994286] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71133 ] 00:06:42.053 [2024-11-27 21:39:05.124814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.053 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:42.053 [2024-11-27 21:39:05.140814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.430 [2024-11-27T21:39:06.551Z] ====================================== 00:06:43.430 [2024-11-27T21:39:06.551Z] busy:2606763054 (cyc) 00:06:43.430 [2024-11-27T21:39:06.551Z] total_run_count: 412000 00:06:43.430 [2024-11-27T21:39:06.551Z] tsc_hz: 2600000000 (cyc) 00:06:43.430 [2024-11-27T21:39:06.551Z] ====================================== 00:06:43.430 [2024-11-27T21:39:06.551Z] poller_cost: 6327 (cyc), 2433 (nsec) 00:06:43.430 00:06:43.430 real 0m1.214s 00:06:43.430 user 0m1.067s 00:06:43.430 sys 0m0.042s 00:06:43.430 21:39:06 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.430 ************************************ 00:06:43.430 END TEST thread_poller_perf 00:06:43.430 ************************************ 00:06:43.430 21:39:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:43.430 21:39:06 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:43.430 21:39:06 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:43.430 21:39:06 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.430 21:39:06 thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.430 ************************************ 00:06:43.430 START TEST thread_poller_perf 00:06:43.430 ************************************ 00:06:43.430 21:39:06 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:43.430 [2024-11-27 21:39:06.256472] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:43.430 [2024-11-27 21:39:06.256874] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71164 ] 00:06:43.430 [2024-11-27 21:39:06.402915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.430 [2024-11-27 21:39:06.421840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.430 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:44.371 [2024-11-27T21:39:07.492Z] ====================================== 00:06:44.371 [2024-11-27T21:39:07.492Z] busy:2603500262 (cyc) 00:06:44.371 [2024-11-27T21:39:07.492Z] total_run_count: 3970000 00:06:44.371 [2024-11-27T21:39:07.492Z] tsc_hz: 2600000000 (cyc) 00:06:44.371 [2024-11-27T21:39:07.492Z] ====================================== 00:06:44.371 [2024-11-27T21:39:07.492Z] poller_cost: 655 (cyc), 251 (nsec) 00:06:44.371 00:06:44.371 real 0m1.228s 00:06:44.371 user 0m1.076s 00:06:44.371 sys 0m0.046s 00:06:44.371 21:39:07 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.372 ************************************ 00:06:44.372 END TEST thread_poller_perf 00:06:44.372 21:39:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:44.372 ************************************ 00:06:44.630 21:39:07 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:44.630 00:06:44.630 real 0m2.675s 00:06:44.630 user 0m2.256s 00:06:44.630 sys 0m0.205s 00:06:44.630 21:39:07 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.630 21:39:07 thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.630 ************************************ 00:06:44.630 END TEST thread 00:06:44.630 ************************************ 00:06:44.630 21:39:07 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:44.630 21:39:07 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:44.630 21:39:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.630 21:39:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.630 21:39:07 -- common/autotest_common.sh@10 -- # set +x 00:06:44.630 ************************************ 00:06:44.630 START TEST app_cmdline 00:06:44.630 ************************************ 00:06:44.630 21:39:07 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:44.630 * Looking for test storage... 
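The poller_cost figure in each summary above follows directly from the other counters: busy TSC cycles divided by total_run_count gives cycles per poller invocation, and dividing by the 2.6 GHz TSC rate converts that to nanoseconds. The reported values can be reproduced with plain shell arithmetic:

  # 1 us period run: 2606763054 cyc / 412000 runs -> 6327 cyc, ~2433 nsec
  echo $(( 2606763054 / 412000 ))
  echo $(( 2606763054 / 412000 * 1000000000 / 2600000000 ))
  # 0 us period run: 2603500262 cyc / 3970000 runs -> 655 cyc, ~251 nsec
  echo $(( 2603500262 / 3970000 ))
  echo $(( 2603500262 / 3970000 * 1000000000 / 2600000000 ))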
00:06:44.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:44.630 21:39:07 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:44.630 21:39:07 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:44.630 21:39:07 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:44.630 21:39:07 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:44.630 21:39:07 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.630 21:39:07 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.630 21:39:07 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.630 21:39:07 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.630 21:39:07 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.630 21:39:07 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.630 21:39:07 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.630 21:39:07 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.630 21:39:07 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.630 21:39:07 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.630 21:39:07 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.630 21:39:07 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:44.630 21:39:07 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:44.630 21:39:07 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.630 21:39:07 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:44.630 21:39:07 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:44.630 21:39:07 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:44.630 21:39:07 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.630 21:39:07 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:44.630 21:39:07 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.630 21:39:07 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:44.630 21:39:07 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:44.630 21:39:07 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.630 21:39:07 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:44.630 21:39:07 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.630 21:39:07 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.630 21:39:07 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.630 21:39:07 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:44.630 21:39:07 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.630 21:39:07 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:44.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.630 --rc genhtml_branch_coverage=1 00:06:44.630 --rc genhtml_function_coverage=1 00:06:44.630 --rc genhtml_legend=1 00:06:44.630 --rc geninfo_all_blocks=1 00:06:44.630 --rc geninfo_unexecuted_blocks=1 00:06:44.630 00:06:44.630 ' 00:06:44.630 21:39:07 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:44.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.630 --rc genhtml_branch_coverage=1 00:06:44.630 --rc genhtml_function_coverage=1 00:06:44.630 --rc genhtml_legend=1 00:06:44.630 --rc geninfo_all_blocks=1 00:06:44.630 --rc geninfo_unexecuted_blocks=1 00:06:44.630 
00:06:44.630 ' 00:06:44.630 21:39:07 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:44.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.630 --rc genhtml_branch_coverage=1 00:06:44.630 --rc genhtml_function_coverage=1 00:06:44.630 --rc genhtml_legend=1 00:06:44.630 --rc geninfo_all_blocks=1 00:06:44.630 --rc geninfo_unexecuted_blocks=1 00:06:44.630 00:06:44.630 ' 00:06:44.630 21:39:07 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:44.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.630 --rc genhtml_branch_coverage=1 00:06:44.630 --rc genhtml_function_coverage=1 00:06:44.630 --rc genhtml_legend=1 00:06:44.630 --rc geninfo_all_blocks=1 00:06:44.630 --rc geninfo_unexecuted_blocks=1 00:06:44.630 00:06:44.630 ' 00:06:44.630 21:39:07 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:44.630 21:39:07 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71253 00:06:44.630 21:39:07 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71253 00:06:44.630 21:39:07 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:44.630 21:39:07 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 71253 ']' 00:06:44.630 21:39:07 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.630 21:39:07 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.630 21:39:07 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.630 21:39:07 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.630 21:39:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:44.891 [2024-11-27 21:39:07.763412] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:44.891 [2024-11-27 21:39:07.763526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71253 ] 00:06:44.891 [2024-11-27 21:39:07.908543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.891 [2024-11-27 21:39:07.927560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.519 21:39:08 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.519 21:39:08 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:45.519 21:39:08 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:45.793 { 00:06:45.793 "version": "SPDK v25.01-pre git sha1 35cd3e84d", 00:06:45.793 "fields": { 00:06:45.793 "major": 25, 00:06:45.794 "minor": 1, 00:06:45.794 "patch": 0, 00:06:45.794 "suffix": "-pre", 00:06:45.794 "commit": "35cd3e84d" 00:06:45.794 } 00:06:45.794 } 00:06:45.794 21:39:08 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:45.794 21:39:08 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:45.794 21:39:08 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:45.794 21:39:08 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:45.794 21:39:08 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:45.794 21:39:08 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:45.794 21:39:08 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:45.794 21:39:08 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.794 21:39:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:45.794 21:39:08 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.794 21:39:08 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:45.794 21:39:08 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:45.794 21:39:08 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:45.794 21:39:08 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:45.794 21:39:08 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:45.794 21:39:08 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:45.794 21:39:08 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.794 21:39:08 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:45.794 21:39:08 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.794 21:39:08 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:45.794 21:39:08 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.794 21:39:08 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:45.794 21:39:08 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:45.794 21:39:08 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:46.054 request: 00:06:46.054 { 00:06:46.054 "method": "env_dpdk_get_mem_stats", 00:06:46.054 "req_id": 1 00:06:46.054 } 00:06:46.054 Got JSON-RPC error response 00:06:46.054 response: 00:06:46.054 { 00:06:46.054 "code": -32601, 00:06:46.054 "message": "Method not found" 00:06:46.054 } 00:06:46.054 21:39:09 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:46.054 21:39:09 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:46.054 21:39:09 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:46.054 21:39:09 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:46.054 21:39:09 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71253 00:06:46.054 21:39:09 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 71253 ']' 00:06:46.054 21:39:09 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 71253 00:06:46.054 21:39:09 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:46.054 21:39:09 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.054 21:39:09 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71253 00:06:46.054 21:39:09 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.054 killing process with pid 71253 00:06:46.054 21:39:09 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.054 21:39:09 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71253' 00:06:46.054 21:39:09 app_cmdline -- common/autotest_common.sh@973 -- # kill 71253 00:06:46.054 21:39:09 app_cmdline -- common/autotest_common.sh@978 -- # wait 71253 00:06:46.313 00:06:46.313 real 0m1.760s 00:06:46.313 user 0m2.102s 00:06:46.313 sys 0m0.382s 00:06:46.313 21:39:09 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.313 ************************************ 00:06:46.313 END TEST app_cmdline 00:06:46.313 ************************************ 00:06:46.313 21:39:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:46.313 21:39:09 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:46.313 21:39:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.313 21:39:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.313 21:39:09 -- common/autotest_common.sh@10 -- # set +x 00:06:46.313 ************************************ 00:06:46.313 START TEST version 00:06:46.313 ************************************ 00:06:46.313 21:39:09 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:46.574 * Looking for test storage... 
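The cmdline.sh run above starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so those are the only two methods the target will answer; the env_dpdk_get_mem_stats call is deliberately outside the allow-list, and the -32601 'Method not found' response is the expected result. Against a target started the same way, the behaviour can be reproduced directly:

  # allowed by --rpcs-allowed, both succeed:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods
  # not on the allow-list, returns JSON-RPC error -32601:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats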
00:06:46.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:46.574 21:39:09 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:46.574 21:39:09 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:46.574 21:39:09 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:46.574 21:39:09 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:46.574 21:39:09 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.574 21:39:09 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.574 21:39:09 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.574 21:39:09 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.574 21:39:09 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.574 21:39:09 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.574 21:39:09 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.574 21:39:09 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.574 21:39:09 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.574 21:39:09 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.574 21:39:09 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.574 21:39:09 version -- scripts/common.sh@344 -- # case "$op" in 00:06:46.574 21:39:09 version -- scripts/common.sh@345 -- # : 1 00:06:46.574 21:39:09 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.574 21:39:09 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.574 21:39:09 version -- scripts/common.sh@365 -- # decimal 1 00:06:46.574 21:39:09 version -- scripts/common.sh@353 -- # local d=1 00:06:46.574 21:39:09 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.574 21:39:09 version -- scripts/common.sh@355 -- # echo 1 00:06:46.574 21:39:09 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.574 21:39:09 version -- scripts/common.sh@366 -- # decimal 2 00:06:46.574 21:39:09 version -- scripts/common.sh@353 -- # local d=2 00:06:46.574 21:39:09 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.574 21:39:09 version -- scripts/common.sh@355 -- # echo 2 00:06:46.574 21:39:09 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.574 21:39:09 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.574 21:39:09 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.574 21:39:09 version -- scripts/common.sh@368 -- # return 0 00:06:46.574 21:39:09 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.574 21:39:09 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:46.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.574 --rc genhtml_branch_coverage=1 00:06:46.574 --rc genhtml_function_coverage=1 00:06:46.574 --rc genhtml_legend=1 00:06:46.574 --rc geninfo_all_blocks=1 00:06:46.574 --rc geninfo_unexecuted_blocks=1 00:06:46.574 00:06:46.574 ' 00:06:46.574 21:39:09 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:46.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.574 --rc genhtml_branch_coverage=1 00:06:46.574 --rc genhtml_function_coverage=1 00:06:46.574 --rc genhtml_legend=1 00:06:46.574 --rc geninfo_all_blocks=1 00:06:46.574 --rc geninfo_unexecuted_blocks=1 00:06:46.574 00:06:46.574 ' 00:06:46.574 21:39:09 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:46.574 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:46.574 --rc genhtml_branch_coverage=1 00:06:46.574 --rc genhtml_function_coverage=1 00:06:46.574 --rc genhtml_legend=1 00:06:46.574 --rc geninfo_all_blocks=1 00:06:46.574 --rc geninfo_unexecuted_blocks=1 00:06:46.574 00:06:46.574 ' 00:06:46.574 21:39:09 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:46.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.574 --rc genhtml_branch_coverage=1 00:06:46.574 --rc genhtml_function_coverage=1 00:06:46.574 --rc genhtml_legend=1 00:06:46.574 --rc geninfo_all_blocks=1 00:06:46.574 --rc geninfo_unexecuted_blocks=1 00:06:46.574 00:06:46.574 ' 00:06:46.574 21:39:09 version -- app/version.sh@17 -- # get_header_version major 00:06:46.574 21:39:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:46.574 21:39:09 version -- app/version.sh@14 -- # cut -f2 00:06:46.574 21:39:09 version -- app/version.sh@14 -- # tr -d '"' 00:06:46.574 21:39:09 version -- app/version.sh@17 -- # major=25 00:06:46.574 21:39:09 version -- app/version.sh@18 -- # get_header_version minor 00:06:46.574 21:39:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:46.574 21:39:09 version -- app/version.sh@14 -- # cut -f2 00:06:46.574 21:39:09 version -- app/version.sh@14 -- # tr -d '"' 00:06:46.574 21:39:09 version -- app/version.sh@18 -- # minor=1 00:06:46.574 21:39:09 version -- app/version.sh@19 -- # get_header_version patch 00:06:46.574 21:39:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:46.574 21:39:09 version -- app/version.sh@14 -- # tr -d '"' 00:06:46.574 21:39:09 version -- app/version.sh@14 -- # cut -f2 00:06:46.574 21:39:09 version -- app/version.sh@19 -- # patch=0 00:06:46.574 21:39:09 version -- app/version.sh@20 -- # get_header_version suffix 00:06:46.574 21:39:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:46.575 21:39:09 version -- app/version.sh@14 -- # cut -f2 00:06:46.575 21:39:09 version -- app/version.sh@14 -- # tr -d '"' 00:06:46.575 21:39:09 version -- app/version.sh@20 -- # suffix=-pre 00:06:46.575 21:39:09 version -- app/version.sh@22 -- # version=25.1 00:06:46.575 21:39:09 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:46.575 21:39:09 version -- app/version.sh@28 -- # version=25.1rc0 00:06:46.575 21:39:09 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:46.575 21:39:09 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:46.575 21:39:09 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:46.575 21:39:09 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:46.575 00:06:46.575 real 0m0.211s 00:06:46.575 user 0m0.123s 00:06:46.575 sys 0m0.108s 00:06:46.575 ************************************ 00:06:46.575 END TEST version 00:06:46.575 ************************************ 00:06:46.575 21:39:09 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.575 21:39:09 version -- common/autotest_common.sh@10 -- # set +x 00:06:46.575 21:39:09 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:46.575 21:39:09 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:46.575 21:39:09 -- spdk/autotest.sh@194 -- # uname -s 00:06:46.575 21:39:09 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:46.575 21:39:09 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:46.575 21:39:09 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:46.575 21:39:09 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:46.575 21:39:09 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:46.575 21:39:09 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:46.575 21:39:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.575 21:39:09 -- common/autotest_common.sh@10 -- # set +x 00:06:46.575 ************************************ 00:06:46.575 START TEST blockdev_nvme 00:06:46.575 ************************************ 00:06:46.575 21:39:09 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:46.835 * Looking for test storage... 00:06:46.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:46.835 21:39:09 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:46.835 21:39:09 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:06:46.835 21:39:09 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:46.835 21:39:09 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:46.835 21:39:09 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.835 21:39:09 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.835 21:39:09 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.835 21:39:09 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.835 21:39:09 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.835 21:39:09 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.835 21:39:09 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.835 21:39:09 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.835 21:39:09 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.835 21:39:09 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.835 21:39:09 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.835 21:39:09 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:46.835 21:39:09 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:46.835 21:39:09 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.835 21:39:09 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.835 21:39:09 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:46.835 21:39:09 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:46.835 21:39:09 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.835 21:39:09 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:46.835 21:39:09 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.835 21:39:09 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:46.835 21:39:09 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:46.835 21:39:09 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.835 21:39:09 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:46.835 21:39:09 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.835 21:39:09 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.835 21:39:09 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.835 21:39:09 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:46.835 21:39:09 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.835 21:39:09 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:46.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.835 --rc genhtml_branch_coverage=1 00:06:46.835 --rc genhtml_function_coverage=1 00:06:46.835 --rc genhtml_legend=1 00:06:46.835 --rc geninfo_all_blocks=1 00:06:46.835 --rc geninfo_unexecuted_blocks=1 00:06:46.835 00:06:46.835 ' 00:06:46.835 21:39:09 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:46.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.835 --rc genhtml_branch_coverage=1 00:06:46.835 --rc genhtml_function_coverage=1 00:06:46.835 --rc genhtml_legend=1 00:06:46.835 --rc geninfo_all_blocks=1 00:06:46.835 --rc geninfo_unexecuted_blocks=1 00:06:46.835 00:06:46.835 ' 00:06:46.835 21:39:09 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:46.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.835 --rc genhtml_branch_coverage=1 00:06:46.835 --rc genhtml_function_coverage=1 00:06:46.835 --rc genhtml_legend=1 00:06:46.835 --rc geninfo_all_blocks=1 00:06:46.835 --rc geninfo_unexecuted_blocks=1 00:06:46.835 00:06:46.835 ' 00:06:46.835 21:39:09 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:46.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.836 --rc genhtml_branch_coverage=1 00:06:46.836 --rc genhtml_function_coverage=1 00:06:46.836 --rc genhtml_legend=1 00:06:46.836 --rc geninfo_all_blocks=1 00:06:46.836 --rc geninfo_unexecuted_blocks=1 00:06:46.836 00:06:46.836 ' 00:06:46.836 21:39:09 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:46.836 21:39:09 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:46.836 21:39:09 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:46.836 21:39:09 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:46.836 21:39:09 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:46.836 21:39:09 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:46.836 21:39:09 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:46.836 21:39:09 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:46.836 21:39:09 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:46.836 21:39:09 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:46.836 21:39:09 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:46.836 21:39:09 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:46.836 21:39:09 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:06:46.836 21:39:09 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:46.836 21:39:09 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:46.836 21:39:09 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:06:46.836 21:39:09 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:46.836 21:39:09 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:06:46.836 21:39:09 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:46.836 21:39:09 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:46.836 21:39:09 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:46.836 21:39:09 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:06:46.836 21:39:09 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:06:46.836 21:39:09 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:46.836 21:39:09 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71414 00:06:46.836 21:39:09 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:46.836 21:39:09 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 71414 00:06:46.836 21:39:09 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 71414 ']' 00:06:46.836 21:39:09 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:46.836 21:39:09 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.836 21:39:09 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.836 21:39:09 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.836 21:39:09 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.836 21:39:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:46.836 [2024-11-27 21:39:09.898124] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
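Once the target is up, setup_nvme_conf (traced just below) runs scripts/gen_nvme.sh and hands the result to load_subsystem_config: every QEMU NVMe controller becomes one bdev_nvme_attach_controller entry keyed by its PCIe address. As a rough, trimmed illustration of the JSON shape involved, using only the first of the four controllers from the trace below:

  # illustrative fragment only; the generated config lists Nvme0..Nvme3
  conf='{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } } ] }'
  echo "$conf" | python3 -m json.tool    # pretty-print; the test passes it via rpc_cmd load_subsystem_config -j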
00:06:46.836 [2024-11-27 21:39:09.898542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71414 ] 00:06:47.096 [2024-11-27 21:39:10.046054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.096 [2024-11-27 21:39:10.074812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.668 21:39:10 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.668 21:39:10 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:47.668 21:39:10 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:47.668 21:39:10 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:06:47.668 21:39:10 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:47.668 21:39:10 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:47.668 21:39:10 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:47.928 21:39:10 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:47.928 21:39:10 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.928 21:39:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:48.190 21:39:11 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.190 21:39:11 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:48.190 21:39:11 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.190 21:39:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:48.190 21:39:11 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.190 21:39:11 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:06:48.190 21:39:11 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:48.190 21:39:11 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.190 21:39:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:48.190 21:39:11 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.190 21:39:11 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:48.190 21:39:11 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.190 21:39:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:48.190 21:39:11 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.190 21:39:11 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:48.190 21:39:11 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.190 21:39:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:48.190 21:39:11 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.190 21:39:11 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:48.190 21:39:11 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:48.190 21:39:11 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:48.190 21:39:11 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.190 21:39:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:48.190 21:39:11 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.190 21:39:11 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:48.191 21:39:11 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "3ea64754-dc19-4b6f-bc16-ee3bf543f4e6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "3ea64754-dc19-4b6f-bc16-ee3bf543f4e6",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "1679d9cf-545d-44a0-a3da-7cb41eeefacc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "1679d9cf-545d-44a0-a3da-7cb41eeefacc",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' 
"ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "61e82446-a05c-45cf-94fe-c0206a9734d3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "61e82446-a05c-45cf-94fe-c0206a9734d3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "1113b254-08a1-432a-85b8-186a91991222"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1113b254-08a1-432a-85b8-186a91991222",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "edf0ff47-1f3b-4b70-a18a-6b921364a6eb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "edf0ff47-1f3b-4b70-a18a-6b921364a6eb",' ' "numa_id": -1,' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "5bb6f2a3-61a1-4275-a386-3accd3e39ca9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "5bb6f2a3-61a1-4275-a386-3accd3e39ca9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:48.191 21:39:11 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:48.191 21:39:11 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:48.191 21:39:11 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:48.191 21:39:11 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:48.191 21:39:11 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 71414 00:06:48.191 21:39:11 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 71414 ']' 00:06:48.191 21:39:11 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 71414 00:06:48.191 21:39:11 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:48.191 21:39:11 blockdev_nvme -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.191 21:39:11 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71414 00:06:48.191 killing process with pid 71414 00:06:48.191 21:39:11 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.191 21:39:11 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.191 21:39:11 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71414' 00:06:48.191 21:39:11 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 71414 00:06:48.191 21:39:11 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 71414 00:06:48.759 21:39:11 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:48.759 21:39:11 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:48.759 21:39:11 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:48.759 21:39:11 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.759 21:39:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:48.759 ************************************ 00:06:48.759 START TEST bdev_hello_world 00:06:48.759 ************************************ 00:06:48.759 21:39:11 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:48.759 [2024-11-27 21:39:11.654435] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:48.759 [2024-11-27 21:39:11.654691] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71484 ] 00:06:48.759 [2024-11-27 21:39:11.803034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.759 [2024-11-27 21:39:11.832258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.326 [2024-11-27 21:39:12.237785] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:49.326 [2024-11-27 21:39:12.237849] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:49.326 [2024-11-27 21:39:12.237872] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:49.326 [2024-11-27 21:39:12.240261] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:49.326 [2024-11-27 21:39:12.241247] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:49.326 [2024-11-27 21:39:12.241289] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:49.326 [2024-11-27 21:39:12.241718] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:06:49.326 00:06:49.326 [2024-11-27 21:39:12.241751] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:49.326 00:06:49.326 real 0m0.845s 00:06:49.326 user 0m0.546s 00:06:49.326 sys 0m0.192s 00:06:49.326 21:39:12 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.326 21:39:12 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:49.326 ************************************ 00:06:49.326 END TEST bdev_hello_world 00:06:49.326 ************************************ 00:06:49.586 21:39:12 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:49.586 21:39:12 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:49.586 21:39:12 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.586 21:39:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:49.586 ************************************ 00:06:49.586 START TEST bdev_bounds 00:06:49.586 ************************************ 00:06:49.586 Process bdevio pid: 71507 00:06:49.586 21:39:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:49.586 21:39:12 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71507 00:06:49.586 21:39:12 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:49.586 21:39:12 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71507' 00:06:49.586 21:39:12 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71507 00:06:49.586 21:39:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 71507 ']' 00:06:49.586 21:39:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.586 21:39:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.586 21:39:12 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:49.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.586 21:39:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.586 21:39:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.586 21:39:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:49.586 [2024-11-27 21:39:12.573713] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:49.586 [2024-11-27 21:39:12.573872] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71507 ] 00:06:49.846 [2024-11-27 21:39:12.721025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.846 [2024-11-27 21:39:12.753505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.846 [2024-11-27 21:39:12.753673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.846 [2024-11-27 21:39:12.753695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.420 21:39:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.420 21:39:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:50.420 21:39:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:50.420 I/O targets: 00:06:50.420 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:50.420 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:50.420 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:50.420 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:50.420 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:50.420 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:50.420 00:06:50.420 00:06:50.420 CUnit - A unit testing framework for C - Version 2.1-3 00:06:50.420 http://cunit.sourceforge.net/ 00:06:50.420 00:06:50.420 00:06:50.420 Suite: bdevio tests on: Nvme3n1 00:06:50.420 Test: blockdev write read block ...passed 00:06:50.420 Test: blockdev write zeroes read block ...passed 00:06:50.681 Test: blockdev write zeroes read no split ...passed 00:06:50.681 Test: blockdev write zeroes read split ...passed 00:06:50.681 Test: blockdev write zeroes read split partial ...passed 00:06:50.681 Test: blockdev reset ...[2024-11-27 21:39:13.553550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:50.681 [2024-11-27 21:39:13.557365] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller spassed 00:06:50.681 Test: blockdev write read 8 blocks ...uccessful. 
00:06:50.681 passed 00:06:50.681 Test: blockdev write read size > 128k ...passed 00:06:50.681 Test: blockdev write read invalid size ...passed 00:06:50.681 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:50.681 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:50.681 Test: blockdev write read max offset ...passed 00:06:50.681 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:50.681 Test: blockdev writev readv 8 blocks ...passed 00:06:50.681 Test: blockdev writev readv 30 x 1block ...passed 00:06:50.681 Test: blockdev writev readv block ...passed 00:06:50.681 Test: blockdev writev readv size > 128k ...passed 00:06:50.681 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:50.681 Test: blockdev comparev and writev ...[2024-11-27 21:39:13.574238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:06:50.681 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2c2a06000 len:0x1000 00:06:50.681 [2024-11-27 21:39:13.574461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:50.681 passed 00:06:50.681 Test: blockdev nvme passthru vendor specific ...passed 00:06:50.681 Test: blockdev nvme admin passthru ...[2024-11-27 21:39:13.575738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:50.681 [2024-11-27 21:39:13.575812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:50.681 passed 00:06:50.681 Test: blockdev copy ...passed 00:06:50.681 Suite: bdevio tests on: Nvme2n3 00:06:50.681 Test: blockdev write read block ...passed 00:06:50.681 Test: blockdev write zeroes read block ...passed 00:06:50.681 Test: blockdev write zeroes read no split ...passed 00:06:50.681 Test: blockdev write zeroes read split ...passed 00:06:50.681 Test: blockdev write zeroes read split partial ...passed 00:06:50.681 Test: blockdev reset ...[2024-11-27 21:39:13.604731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:50.681 passed 00:06:50.681 Test: blockdev write read 8 blocks ...[2024-11-27 21:39:13.607939] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:50.681 passed 00:06:50.681 Test: blockdev write read size > 128k ...passed 00:06:50.681 Test: blockdev write read invalid size ...passed 00:06:50.681 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:50.681 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:50.681 Test: blockdev write read max offset ...passed 00:06:50.681 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:50.681 Test: blockdev writev readv 8 blocks ...passed 00:06:50.681 Test: blockdev writev readv 30 x 1block ...passed 00:06:50.681 Test: blockdev writev readv block ...passed 00:06:50.681 Test: blockdev writev readv size > 128k ...passed 00:06:50.681 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:50.681 Test: blockdev comparev and writev ...[2024-11-27 21:39:13.625618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf202000 len:0x1000 00:06:50.681 [2024-11-27 21:39:13.625807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:50.681 passed 00:06:50.681 Test: blockdev nvme passthru rw ...passed 00:06:50.681 Test: blockdev nvme passthru vendor specific ...passed 00:06:50.682 Test: blockdev nvme admin passthru ...[2024-11-27 21:39:13.628035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:50.682 [2024-11-27 21:39:13.628085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:50.682 passed 00:06:50.682 Test: blockdev copy ...passed 00:06:50.682 Suite: bdevio tests on: Nvme2n2 00:06:50.682 Test: blockdev write read block ...passed 00:06:50.682 Test: blockdev write zeroes read block ...passed 00:06:50.682 Test: blockdev write zeroes read no split ...passed 00:06:50.682 Test: blockdev write zeroes read split ...passed 00:06:50.682 Test: blockdev write zeroes read split partial ...passed 00:06:50.682 Test: blockdev reset ...[2024-11-27 21:39:13.657822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:50.682 passed 00:06:50.682 Test: blockdev write read 8 blocks ...[2024-11-27 21:39:13.662102] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:50.682 passed 00:06:50.682 Test: blockdev write read size > 128k ...passed 00:06:50.682 Test: blockdev write read invalid size ...passed 00:06:50.682 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:50.682 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:50.682 Test: blockdev write read max offset ...passed 00:06:50.682 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:50.682 Test: blockdev writev readv 8 blocks ...passed 00:06:50.682 Test: blockdev writev readv 30 x 1block ...passed 00:06:50.682 Test: blockdev writev readv block ...passed 00:06:50.682 Test: blockdev writev readv size > 128k ...passed 00:06:50.682 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:50.682 Test: blockdev comparev and writev ...[2024-11-27 21:39:13.679427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d663b000 len:0x1000 00:06:50.682 [2024-11-27 21:39:13.679604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:50.682 passed 00:06:50.682 Test: blockdev nvme passthru rw ...passed 00:06:50.682 Test: blockdev nvme passthru vendor specific ...[2024-11-27 21:39:13.682202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:50.682 [2024-11-27 21:39:13.682244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:50.682 passed 00:06:50.682 Test: blockdev nvme admin passthru ...passed 00:06:50.682 Test: blockdev copy ...passed 00:06:50.682 Suite: bdevio tests on: Nvme2n1 00:06:50.682 Test: blockdev write read block ...passed 00:06:50.682 Test: blockdev write zeroes read block ...passed 00:06:50.682 Test: blockdev write zeroes read no split ...passed 00:06:50.682 Test: blockdev write zeroes read split ...passed 00:06:50.682 Test: blockdev write zeroes read split partial ...passed 00:06:50.682 Test: blockdev reset ...[2024-11-27 21:39:13.710242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:50.682 [2024-11-27 21:39:13.714078] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:06:50.682 Test: blockdev write read 8 blocks ...uccessful. 
00:06:50.682 passed 00:06:50.682 Test: blockdev write read size > 128k ...passed 00:06:50.682 Test: blockdev write read invalid size ...passed 00:06:50.682 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:50.682 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:50.682 Test: blockdev write read max offset ...passed 00:06:50.682 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:50.682 Test: blockdev writev readv 8 blocks ...passed 00:06:50.682 Test: blockdev writev readv 30 x 1block ...passed 00:06:50.682 Test: blockdev writev readv block ...passed 00:06:50.682 Test: blockdev writev readv size > 128k ...passed 00:06:50.682 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:50.682 Test: blockdev comparev and writev ...[2024-11-27 21:39:13.731287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:06:50.682 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2d6637000 len:0x1000 00:06:50.682 [2024-11-27 21:39:13.731485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:50.682 passed 00:06:50.682 Test: blockdev nvme passthru vendor specific ...passed 00:06:50.682 Test: blockdev nvme admin passthru ...[2024-11-27 21:39:13.733838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:50.682 [2024-11-27 21:39:13.733891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:50.682 passed 00:06:50.682 Test: blockdev copy ...passed 00:06:50.682 Suite: bdevio tests on: Nvme1n1 00:06:50.682 Test: blockdev write read block ...passed 00:06:50.682 Test: blockdev write zeroes read block ...passed 00:06:50.682 Test: blockdev write zeroes read no split ...passed 00:06:50.682 Test: blockdev write zeroes read split ...passed 00:06:50.682 Test: blockdev write zeroes read split partial ...passed 00:06:50.682 Test: blockdev reset ...[2024-11-27 21:39:13.764841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:50.682 [2024-11-27 21:39:13.767199] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:50.682 passed 00:06:50.682 Test: blockdev write read 8 blocks ...passed 00:06:50.682 Test: blockdev write read size > 128k ...passed 00:06:50.682 Test: blockdev write read invalid size ...passed 00:06:50.682 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:50.682 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:50.682 Test: blockdev write read max offset ...passed 00:06:50.682 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:50.682 Test: blockdev writev readv 8 blocks ...passed 00:06:50.682 Test: blockdev writev readv 30 x 1block ...passed 00:06:50.682 Test: blockdev writev readv block ...passed 00:06:50.682 Test: blockdev writev readv size > 128k ...passed 00:06:50.682 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:50.682 Test: blockdev comparev and writev ...[2024-11-27 21:39:13.774888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:06:50.682 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2d6633000 len:0x1000 00:06:50.682 [2024-11-27 21:39:13.775064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:50.682 passed 00:06:50.682 Test: blockdev nvme passthru vendor specific ...[2024-11-27 21:39:13.776371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:50.682 [2024-11-27 21:39:13.776414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:50.682 passed 00:06:50.682 Test: blockdev nvme admin passthru ...passed 00:06:50.682 Test: blockdev copy ...passed 00:06:50.682 Suite: bdevio tests on: Nvme0n1 00:06:50.682 Test: blockdev write read block ...passed 00:06:50.682 Test: blockdev write zeroes read block ...passed 00:06:50.682 Test: blockdev write zeroes read no split ...passed 00:06:50.944 Test: blockdev write zeroes read split ...passed 00:06:50.944 Test: blockdev write zeroes read split partial ...passed 00:06:50.944 Test: blockdev reset ...[2024-11-27 21:39:13.811769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:50.944 [2024-11-27 21:39:13.814286] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:06:50.944 passed 00:06:50.944 Test: blockdev write read 8 blocks ...passed 00:06:50.944 Test: blockdev write read size > 128k ...passed 00:06:50.944 Test: blockdev write read invalid size ...passed 00:06:50.944 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:50.944 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:50.944 Test: blockdev write read max offset ...passed 00:06:50.944 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:50.944 Test: blockdev writev readv 8 blocks ...passed 00:06:50.944 Test: blockdev writev readv 30 x 1block ...passed 00:06:50.944 Test: blockdev writev readv block ...passed 00:06:50.944 Test: blockdev writev readv size > 128k ...passed 00:06:50.944 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:50.944 Test: blockdev comparev and writev ...passed 00:06:50.944 Test: blockdev nvme passthru rw ...[2024-11-27 21:39:13.824285] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:50.944 separate metadata which is not supported yet. 00:06:50.944 passed 00:06:50.944 Test: blockdev nvme passthru vendor specific ...[2024-11-27 21:39:13.825478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 Ppassed 00:06:50.944 Test: blockdev nvme admin passthru ...RP2 0x0 00:06:50.944 [2024-11-27 21:39:13.825818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:50.944 passed 00:06:50.944 Test: blockdev copy ...passed 00:06:50.944 00:06:50.944 Run Summary: Type Total Ran Passed Failed Inactive 00:06:50.944 suites 6 6 n/a 0 0 00:06:50.944 tests 138 138 138 0 0 00:06:50.944 asserts 893 893 893 0 n/a 00:06:50.944 00:06:50.944 Elapsed time = 0.674 seconds 00:06:50.944 0 00:06:50.944 21:39:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71507 00:06:50.945 21:39:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 71507 ']' 00:06:50.945 21:39:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 71507 00:06:50.945 21:39:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:50.945 21:39:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.945 21:39:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71507 00:06:50.945 21:39:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:50.945 21:39:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:50.945 21:39:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71507' 00:06:50.945 killing process with pid 71507 00:06:50.945 21:39:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 71507 00:06:50.945 21:39:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 71507 00:06:50.945 21:39:14 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:50.945 00:06:50.945 real 0m1.540s 00:06:50.945 user 0m3.802s 00:06:50.945 sys 0m0.338s 00:06:50.945 21:39:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.945 ************************************ 00:06:50.945 21:39:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:50.945 END 
TEST bdev_bounds 00:06:50.945 ************************************ 00:06:51.206 21:39:14 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:51.206 21:39:14 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:51.206 21:39:14 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.206 21:39:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:51.206 ************************************ 00:06:51.206 START TEST bdev_nbd 00:06:51.206 ************************************ 00:06:51.206 21:39:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:51.206 21:39:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:51.206 21:39:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:51.206 21:39:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.206 21:39:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:51.206 21:39:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:51.207 21:39:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:51.207 21:39:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:51.207 21:39:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:51.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:51.207 21:39:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:51.207 21:39:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:51.207 21:39:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:51.207 21:39:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:51.207 21:39:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:51.207 21:39:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:51.207 21:39:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:51.207 21:39:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71561 00:06:51.207 21:39:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:51.207 21:39:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71561 /var/tmp/spdk-nbd.sock 00:06:51.207 21:39:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 71561 ']' 00:06:51.207 21:39:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:51.207 21:39:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:51.207 21:39:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.207 21:39:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:51.207 21:39:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.207 21:39:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:51.207 [2024-11-27 21:39:14.184223] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:51.207 [2024-11-27 21:39:14.184547] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.468 [2024-11-27 21:39:14.334904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.468 [2024-11-27 21:39:14.364258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.039 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.039 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:52.039 21:39:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:52.039 21:39:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.039 21:39:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:52.039 21:39:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:52.039 21:39:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:52.039 21:39:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.039 21:39:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:52.039 21:39:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:52.039 21:39:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:52.039 21:39:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:52.039 21:39:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:52.039 21:39:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:52.039 21:39:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:52.300 21:39:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:52.300 21:39:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:52.300 21:39:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:52.300 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:52.300 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:52.300 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:52.300 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:52.300 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:52.300 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:52.300 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:52.300 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:52.300 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:52.300 1+0 records in 
00:06:52.300 1+0 records out 00:06:52.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112098 s, 3.7 MB/s 00:06:52.300 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.300 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:52.300 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.300 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:52.300 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:52.300 21:39:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:52.300 21:39:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:52.300 21:39:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:52.562 21:39:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:52.562 21:39:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:52.562 21:39:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:52.562 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:52.562 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:52.562 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:52.562 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:52.562 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:52.562 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:52.562 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:52.562 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:52.562 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:52.562 1+0 records in 00:06:52.562 1+0 records out 00:06:52.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100186 s, 4.1 MB/s 00:06:52.562 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.562 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:52.562 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.562 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:52.562 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:52.562 21:39:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:52.562 21:39:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:52.562 21:39:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:52.824 21:39:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:52.824 21:39:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:52.824 21:39:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:06:52.824 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:52.824 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:52.824 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:52.824 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:52.824 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:52.824 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:52.824 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:52.824 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:52.824 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:52.824 1+0 records in 00:06:52.824 1+0 records out 00:06:52.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00069734 s, 5.9 MB/s 00:06:52.824 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.824 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:52.824 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:52.824 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:52.824 21:39:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:52.824 21:39:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:52.824 21:39:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:52.824 21:39:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:53.086 21:39:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:53.086 21:39:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:53.086 21:39:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:53.086 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:53.086 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:53.086 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:53.086 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:53.086 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:53.086 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:53.086 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:53.086 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:53.086 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:53.086 1+0 records in 00:06:53.086 1+0 records out 00:06:53.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127696 s, 3.2 MB/s 00:06:53.086 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.086 21:39:16 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:53.086 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.086 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:53.086 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:53.086 21:39:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:53.086 21:39:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:53.086 21:39:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:53.346 21:39:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:53.346 21:39:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:53.346 21:39:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:53.346 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:53.346 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:53.346 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:53.346 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:53.347 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:53.347 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:53.347 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:53.347 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:53.347 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:53.347 1+0 records in 00:06:53.347 1+0 records out 00:06:53.347 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00128977 s, 3.2 MB/s 00:06:53.347 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.347 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:53.347 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.347 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:53.347 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:53.347 21:39:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:53.347 21:39:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:53.347 21:39:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:53.608 21:39:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:53.608 21:39:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:53.608 21:39:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:53.608 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:53.608 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:53.608 21:39:16 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:53.608 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:53.608 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:53.608 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:53.608 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:53.608 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:53.608 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:53.608 1+0 records in 00:06:53.608 1+0 records out 00:06:53.608 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00124184 s, 3.3 MB/s 00:06:53.608 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.608 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:53.608 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.608 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:53.608 21:39:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:53.608 21:39:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:53.608 21:39:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:53.608 21:39:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.870 21:39:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:53.870 { 00:06:53.870 "nbd_device": "/dev/nbd0", 00:06:53.870 "bdev_name": "Nvme0n1" 00:06:53.870 }, 00:06:53.870 { 00:06:53.870 "nbd_device": "/dev/nbd1", 00:06:53.870 "bdev_name": "Nvme1n1" 00:06:53.870 }, 00:06:53.870 { 00:06:53.870 "nbd_device": "/dev/nbd2", 00:06:53.870 "bdev_name": "Nvme2n1" 00:06:53.870 }, 00:06:53.870 { 00:06:53.870 "nbd_device": "/dev/nbd3", 00:06:53.870 "bdev_name": "Nvme2n2" 00:06:53.870 }, 00:06:53.870 { 00:06:53.870 "nbd_device": "/dev/nbd4", 00:06:53.870 "bdev_name": "Nvme2n3" 00:06:53.870 }, 00:06:53.870 { 00:06:53.870 "nbd_device": "/dev/nbd5", 00:06:53.870 "bdev_name": "Nvme3n1" 00:06:53.870 } 00:06:53.870 ]' 00:06:53.870 21:39:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:53.870 21:39:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:53.870 { 00:06:53.870 "nbd_device": "/dev/nbd0", 00:06:53.870 "bdev_name": "Nvme0n1" 00:06:53.870 }, 00:06:53.870 { 00:06:53.870 "nbd_device": "/dev/nbd1", 00:06:53.870 "bdev_name": "Nvme1n1" 00:06:53.870 }, 00:06:53.870 { 00:06:53.870 "nbd_device": "/dev/nbd2", 00:06:53.870 "bdev_name": "Nvme2n1" 00:06:53.870 }, 00:06:53.870 { 00:06:53.870 "nbd_device": "/dev/nbd3", 00:06:53.870 "bdev_name": "Nvme2n2" 00:06:53.870 }, 00:06:53.870 { 00:06:53.870 "nbd_device": "/dev/nbd4", 00:06:53.870 "bdev_name": "Nvme2n3" 00:06:53.870 }, 00:06:53.870 { 00:06:53.870 "nbd_device": "/dev/nbd5", 00:06:53.870 "bdev_name": "Nvme3n1" 00:06:53.870 } 00:06:53.870 ]' 00:06:53.870 21:39:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:53.870 21:39:16 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:53.870 21:39:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.870 21:39:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:53.870 21:39:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:53.870 21:39:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:53.870 21:39:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.870 21:39:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:54.131 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:54.131 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:54.131 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:54.131 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.131 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.131 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:54.131 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:54.131 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.131 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.131 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:54.393 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:54.393 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:54.393 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:54.393 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.393 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.393 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:54.393 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:54.393 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.393 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.393 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:54.393 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:54.653 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:54.653 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:54.653 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.653 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.653 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:54.653 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:54.653 21:39:17 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:54.653 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.653 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:54.653 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:54.653 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:54.653 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:54.653 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.653 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.653 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:54.653 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:54.653 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.653 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.653 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:54.914 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:54.914 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:54.914 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:54.914 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.914 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.914 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:54.914 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:54.914 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.914 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.914 21:39:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:55.216 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:55.217 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:55.217 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:55.217 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.217 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.217 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:55.217 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:55.217 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.217 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:55.217 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.217 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:55.492 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:55.492 21:39:18 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:55.492 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.492 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:55.492 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:55.492 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.492 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:55.492 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:55.492 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:55.492 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:55.492 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:55.492 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:55.492 21:39:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:55.492 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.492 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:55.492 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:55.492 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:55.492 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:55.492 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:55.492 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.492 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:55.492 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:55.492 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:55.492 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:55.493 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:55.493 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:55.493 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:55.493 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:55.493 /dev/nbd0 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.751 
21:39:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.751 1+0 records in 00:06:55.751 1+0 records out 00:06:55.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421196 s, 9.7 MB/s 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:55.751 /dev/nbd1 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.751 1+0 records in 00:06:55.751 1+0 records out 00:06:55.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000507506 s, 8.1 MB/s 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:55.751 21:39:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:56.009 /dev/nbd10 00:06:56.009 21:39:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:56.009 21:39:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:56.009 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:56.009 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:56.009 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:56.009 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:56.009 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:56.009 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:56.009 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:56.009 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:56.009 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.009 1+0 records in 00:06:56.009 1+0 records out 00:06:56.009 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283099 s, 14.5 MB/s 00:06:56.009 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.009 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:56.009 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.009 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:56.009 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:56.009 21:39:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.009 21:39:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:56.009 21:39:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:56.269 /dev/nbd11 00:06:56.269 21:39:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:56.269 21:39:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:56.269 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:56.269 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:56.269 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:56.269 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:56.269 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:56.269 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:56.269 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:56.269 21:39:19 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:56.269 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.269 1+0 records in 00:06:56.269 1+0 records out 00:06:56.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004459 s, 9.2 MB/s 00:06:56.269 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.269 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:56.269 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.270 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:56.270 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:56.270 21:39:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.270 21:39:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:56.270 21:39:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:56.528 /dev/nbd12 00:06:56.528 21:39:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:56.528 21:39:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:56.528 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:56.528 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:56.528 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:56.528 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:56.528 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:56.528 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:56.528 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:56.528 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:56.528 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.528 1+0 records in 00:06:56.528 1+0 records out 00:06:56.528 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409321 s, 10.0 MB/s 00:06:56.528 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.528 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:56.528 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.529 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:56.529 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:56.529 21:39:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.529 21:39:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:56.529 21:39:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:56.786 /dev/nbd13 
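The rpc.py calls traced above attach each NVMe bdev to an NBD node through the RPC server on /var/tmp/spdk-nbd.sock. As a condensed, standalone sketch of that mapping (the bdev names, device nodes, and rpc.py invocation are taken from the trace; the loop itself is illustrative, not the test's own helper):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    bdevs=(Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
    nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    for i in "${!bdevs[@]}"; do
        # nbd_start_disk prints the attached /dev/nbdX node, as seen in the log above.
        "$rpc" -s "$sock" nbd_start_disk "${bdevs[$i]}" "${nbds[$i]}"
    done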
00:06:56.786 21:39:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:56.786 21:39:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:56.786 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:56.786 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:56.786 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:56.786 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:56.786 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:56.786 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:56.786 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:56.786 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:56.786 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.786 1+0 records in 00:06:56.786 1+0 records out 00:06:56.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000520609 s, 7.9 MB/s 00:06:56.786 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.787 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:56.787 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.787 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:56.787 21:39:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:56.787 21:39:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.787 21:39:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:56.787 21:39:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:56.787 21:39:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.787 21:39:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:57.044 21:39:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:57.044 { 00:06:57.044 "nbd_device": "/dev/nbd0", 00:06:57.044 "bdev_name": "Nvme0n1" 00:06:57.044 }, 00:06:57.044 { 00:06:57.044 "nbd_device": "/dev/nbd1", 00:06:57.044 "bdev_name": "Nvme1n1" 00:06:57.044 }, 00:06:57.044 { 00:06:57.044 "nbd_device": "/dev/nbd10", 00:06:57.044 "bdev_name": "Nvme2n1" 00:06:57.044 }, 00:06:57.044 { 00:06:57.044 "nbd_device": "/dev/nbd11", 00:06:57.044 "bdev_name": "Nvme2n2" 00:06:57.045 }, 00:06:57.045 { 00:06:57.045 "nbd_device": "/dev/nbd12", 00:06:57.045 "bdev_name": "Nvme2n3" 00:06:57.045 }, 00:06:57.045 { 00:06:57.045 "nbd_device": "/dev/nbd13", 00:06:57.045 "bdev_name": "Nvme3n1" 00:06:57.045 } 00:06:57.045 ]' 00:06:57.045 21:39:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:57.045 21:39:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:57.045 { 00:06:57.045 "nbd_device": "/dev/nbd0", 00:06:57.045 "bdev_name": "Nvme0n1" 00:06:57.045 }, 00:06:57.045 { 00:06:57.045 "nbd_device": "/dev/nbd1", 00:06:57.045 "bdev_name": "Nvme1n1" 00:06:57.045 
}, 00:06:57.045 { 00:06:57.045 "nbd_device": "/dev/nbd10", 00:06:57.045 "bdev_name": "Nvme2n1" 00:06:57.045 }, 00:06:57.045 { 00:06:57.045 "nbd_device": "/dev/nbd11", 00:06:57.045 "bdev_name": "Nvme2n2" 00:06:57.045 }, 00:06:57.045 { 00:06:57.045 "nbd_device": "/dev/nbd12", 00:06:57.045 "bdev_name": "Nvme2n3" 00:06:57.045 }, 00:06:57.045 { 00:06:57.045 "nbd_device": "/dev/nbd13", 00:06:57.045 "bdev_name": "Nvme3n1" 00:06:57.045 } 00:06:57.045 ]' 00:06:57.045 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:57.045 /dev/nbd1 00:06:57.045 /dev/nbd10 00:06:57.045 /dev/nbd11 00:06:57.045 /dev/nbd12 00:06:57.045 /dev/nbd13' 00:06:57.045 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.045 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:57.045 /dev/nbd1 00:06:57.045 /dev/nbd10 00:06:57.045 /dev/nbd11 00:06:57.045 /dev/nbd12 00:06:57.045 /dev/nbd13' 00:06:57.045 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:57.045 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:57.045 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:57.045 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:57.045 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:57.045 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:57.045 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:57.045 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:57.045 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:57.045 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:57.045 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:57.045 256+0 records in 00:06:57.045 256+0 records out 00:06:57.045 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128646 s, 81.5 MB/s 00:06:57.045 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:57.045 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:57.045 256+0 records in 00:06:57.045 256+0 records out 00:06:57.045 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0479475 s, 21.9 MB/s 00:06:57.045 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:57.045 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:57.045 256+0 records in 00:06:57.045 256+0 records out 00:06:57.045 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0473291 s, 22.2 MB/s 00:06:57.045 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:57.045 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:57.303 256+0 records in 00:06:57.303 256+0 records out 00:06:57.303 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0465644 s, 22.5 MB/s 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:57.303 256+0 records in 00:06:57.303 256+0 records out 00:06:57.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0475979 s, 22.0 MB/s 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:57.303 256+0 records in 00:06:57.303 256+0 records out 00:06:57.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0487737 s, 21.5 MB/s 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:57.303 256+0 records in 00:06:57.303 256+0 records out 00:06:57.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0462231 s, 22.7 MB/s 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.303 21:39:20 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.303 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:57.562 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:57.562 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:57.562 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:57.562 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.562 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.562 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:57.562 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.562 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.562 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.562 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:57.820 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:57.820 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:57.820 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:57.821 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.821 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.821 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:57.821 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.821 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.821 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.821 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:58.079 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:58.079 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:58.079 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:58.079 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.079 
21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.079 21:39:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:58.079 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.079 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.079 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.079 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:58.337 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:58.337 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:58.337 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:58.337 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.337 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.337 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:58.337 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.337 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.337 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.337 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:58.337 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:58.337 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:58.337 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:58.337 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.337 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.337 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:58.337 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.337 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.337 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.337 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:58.595 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:58.595 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:58.595 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:58.595 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.595 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.595 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:58.595 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:58.595 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.595 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.595 21:39:21 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.595 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:58.854 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:58.854 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:58.854 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:58.854 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:58.854 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.854 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:58.854 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:58.854 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:58.854 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:58.854 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:58.854 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:58.854 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:58.854 21:39:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:58.854 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.854 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:58.854 21:39:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:59.113 malloc_lvol_verify 00:06:59.113 21:39:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:59.371 89ef3057-ba4c-4dc6-990e-b8c79bacfb52 00:06:59.371 21:39:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:59.371 686a2cbd-6f75-449e-98a2-5980de179679 00:06:59.371 21:39:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:59.630 /dev/nbd0 00:06:59.630 21:39:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:59.630 21:39:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:59.630 21:39:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:59.630 21:39:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:59.630 21:39:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:59.630 mke2fs 1.47.0 (5-Feb-2023) 00:06:59.630 Discarding device blocks: 0/4096 done 00:06:59.630 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:59.630 00:06:59.630 Allocating group tables: 0/1 done 00:06:59.630 Writing inode tables: 0/1 done 00:06:59.630 Creating journal (1024 blocks): done 00:06:59.630 Writing superblocks and filesystem accounting information: 0/1 done 00:06:59.630 00:06:59.630 21:39:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
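The lvol pass traced above builds a malloc-backed lvstore, exports the logical volume over NBD, and formats it with mkfs.ext4. A condensed sketch of that RPC sequence (names and numeric arguments mirror the trace; the size units are assumed to be MiB, and the teardown call is the nbd_stop_disk seen just below):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # size 16, block size 512, as traced
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 (MiB, assumed) lvol in store "lvs"
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0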
00:06:59.630 21:39:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.630 21:39:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:59.630 21:39:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:59.630 21:39:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:59.630 21:39:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.630 21:39:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:59.889 21:39:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:59.889 21:39:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:59.889 21:39:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:59.889 21:39:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.889 21:39:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.889 21:39:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:59.889 21:39:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:59.889 21:39:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.889 21:39:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71561 00:06:59.889 21:39:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 71561 ']' 00:06:59.889 21:39:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 71561 00:06:59.889 21:39:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:59.889 21:39:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.889 21:39:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71561 00:06:59.889 21:39:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.889 21:39:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.889 killing process with pid 71561 00:06:59.889 21:39:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71561' 00:06:59.889 21:39:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 71561 00:06:59.889 21:39:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 71561 00:07:00.150 21:39:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:00.150 00:07:00.150 real 0m8.956s 00:07:00.150 user 0m13.216s 00:07:00.150 sys 0m3.066s 00:07:00.150 21:39:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.150 ************************************ 00:07:00.150 END TEST bdev_nbd 00:07:00.150 ************************************ 00:07:00.150 21:39:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:00.150 21:39:23 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:00.150 21:39:23 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:07:00.150 skipping fio tests on NVMe due to multi-ns failures. 00:07:00.150 21:39:23 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
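Throughout the bdev_nbd test above, the waitfornbd/waitfornbd_exit helpers from common/autotest_common.sh and nbd_common.sh gate on NBD device state. Reconstructed from the trace, the readiness probe is roughly the sketch below; the 20-iteration bound, /proc/partitions check, scratch-file path, and dd/stat verification come from the log, while the sleep and the overall function shape are assumptions rather than the autotest source:

    waitfornbd_sketch() {
        local nbd_name=$1 i
        # Wait until the kernel lists the device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        grep -q -w "$nbd_name" /proc/partitions || return 1
        # Then confirm a direct 4 KiB read from the device actually returns data.
        dd if="/dev/$nbd_name" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
            bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
        rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        [ "$size" != 0 ]
    }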
00:07:00.150 21:39:23 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:00.150 21:39:23 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:00.150 21:39:23 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:00.150 21:39:23 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.150 21:39:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:00.150 ************************************ 00:07:00.150 START TEST bdev_verify 00:07:00.150 ************************************ 00:07:00.150 21:39:23 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:00.150 [2024-11-27 21:39:23.182072] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:07:00.150 [2024-11-27 21:39:23.182184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71928 ] 00:07:00.409 [2024-11-27 21:39:23.322948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:00.409 [2024-11-27 21:39:23.340880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.409 [2024-11-27 21:39:23.340897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.668 Running I/O for 5 seconds... 00:07:02.994 19392.00 IOPS, 75.75 MiB/s [2024-11-27T21:39:27.058Z] 20864.00 IOPS, 81.50 MiB/s [2024-11-27T21:39:28.043Z] 20736.00 IOPS, 81.00 MiB/s [2024-11-27T21:39:28.985Z] 20464.00 IOPS, 79.94 MiB/s [2024-11-27T21:39:28.985Z] 20224.00 IOPS, 79.00 MiB/s 00:07:05.864 Latency(us) 00:07:05.864 [2024-11-27T21:39:28.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:05.864 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:05.865 Verification LBA range: start 0x0 length 0xbd0bd 00:07:05.865 Nvme0n1 : 5.07 1665.30 6.51 0.00 0.00 76686.90 14619.57 86305.87 00:07:05.865 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:05.865 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:05.865 Nvme0n1 : 5.07 1665.65 6.51 0.00 0.00 76681.34 13712.15 88725.66 00:07:05.865 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:05.865 Verification LBA range: start 0x0 length 0xa0000 00:07:05.865 Nvme1n1 : 5.08 1663.60 6.50 0.00 0.00 76325.45 17039.36 64931.05 00:07:05.865 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:05.865 Verification LBA range: start 0xa0000 length 0xa0000 00:07:05.865 Nvme1n1 : 5.07 1665.17 6.50 0.00 0.00 76615.27 13913.80 82272.89 00:07:05.865 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:05.865 Verification LBA range: start 0x0 length 0x80000 00:07:05.865 Nvme2n1 : 5.08 1662.08 6.49 0.00 0.00 76199.62 18854.20 68560.74 00:07:05.865 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:05.865 Verification LBA range: start 0x80000 length 0x80000 00:07:05.865 Nvme2n1 : 5.08 1664.16 6.50 0.00 0.00 76318.37 15526.99 68560.74 00:07:05.865 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:05.865 Verification LBA range: start 0x0 length 0x80000 00:07:05.865 Nvme2n2 : 5.09 1660.77 6.49 0.00 0.00 76075.00 19963.27 69367.34 00:07:05.865 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:05.865 Verification LBA range: start 0x80000 length 0x80000 00:07:05.865 Nvme2n2 : 5.08 1663.59 6.50 0.00 0.00 76123.80 15224.52 62914.56 00:07:05.865 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:05.865 Verification LBA range: start 0x0 length 0x80000 00:07:05.865 Nvme2n3 : 5.09 1659.55 6.48 0.00 0.00 75934.91 13409.67 71787.13 00:07:05.865 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:05.865 Verification LBA range: start 0x80000 length 0x80000 00:07:05.865 Nvme2n3 : 5.08 1662.10 6.49 0.00 0.00 75988.78 16938.54 65737.65 00:07:05.865 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:05.865 Verification LBA range: start 0x0 length 0x20000 00:07:05.865 Nvme3n1 : 5.10 1669.52 6.52 0.00 0.00 75451.33 3302.01 72593.72 00:07:05.865 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:05.865 Verification LBA range: start 0x20000 length 0x20000 00:07:05.865 Nvme3n1 : 5.09 1660.80 6.49 0.00 0.00 75858.05 12603.08 66140.95 00:07:05.865 [2024-11-27T21:39:28.986Z] =================================================================================================================== 00:07:05.865 [2024-11-27T21:39:28.986Z] Total : 19962.28 77.98 0.00 0.00 76187.77 3302.01 88725.66 00:07:06.432 00:07:06.432 real 0m6.277s 00:07:06.432 user 0m11.896s 00:07:06.432 sys 0m0.190s 00:07:06.432 ************************************ 00:07:06.432 END TEST bdev_verify 00:07:06.432 ************************************ 00:07:06.432 21:39:29 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.432 21:39:29 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:06.432 21:39:29 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:06.432 21:39:29 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:06.432 21:39:29 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.432 21:39:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:06.432 ************************************ 00:07:06.432 START TEST bdev_verify_big_io 00:07:06.432 ************************************ 00:07:06.432 21:39:29 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:06.432 [2024-11-27 21:39:29.506610] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:07:06.432 [2024-11-27 21:39:29.506719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72015 ] 00:07:06.691 [2024-11-27 21:39:29.652296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:06.691 [2024-11-27 21:39:29.673910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.691 [2024-11-27 21:39:29.673992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.262 Running I/O for 5 seconds... 00:07:12.466 1452.00 IOPS, 90.75 MiB/s [2024-11-27T21:39:36.154Z] 2268.00 IOPS, 141.75 MiB/s [2024-11-27T21:39:36.413Z] 2837.67 IOPS, 177.35 MiB/s 00:07:13.292 Latency(us) 00:07:13.292 [2024-11-27T21:39:36.413Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:13.292 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:13.292 Verification LBA range: start 0x0 length 0xbd0b 00:07:13.292 Nvme0n1 : 5.56 120.09 7.51 0.00 0.00 1015554.05 14518.74 1155046.79 00:07:13.292 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:13.292 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:13.292 Nvme0n1 : 5.77 110.92 6.93 0.00 0.00 1102375.46 13006.38 1245385.65 00:07:13.292 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:13.292 Verification LBA range: start 0x0 length 0xa000 00:07:13.292 Nvme1n1 : 5.79 127.86 7.99 0.00 0.00 930448.25 77836.60 955010.76 00:07:13.292 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:13.292 Verification LBA range: start 0xa000 length 0xa000 00:07:13.292 Nvme1n1 : 5.88 105.35 6.58 0.00 0.00 1117101.88 99614.72 1639004.95 00:07:13.292 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:13.292 Verification LBA range: start 0x0 length 0x8000 00:07:13.292 Nvme2n1 : 5.79 127.84 7.99 0.00 0.00 897774.80 78239.90 1051802.39 00:07:13.292 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:13.292 Verification LBA range: start 0x8000 length 0x8000 00:07:13.292 Nvme2n1 : 5.99 110.02 6.88 0.00 0.00 1031191.02 52025.50 1677721.60 00:07:13.292 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:13.292 Verification LBA range: start 0x0 length 0x8000 00:07:13.292 Nvme2n2 : 5.87 135.02 8.44 0.00 0.00 826901.45 82272.89 948557.98 00:07:13.292 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:13.292 Verification LBA range: start 0x8000 length 0x8000 00:07:13.292 Nvme2n2 : 5.99 114.64 7.17 0.00 0.00 959774.32 60898.07 1703532.70 00:07:13.292 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:13.292 Verification LBA range: start 0x0 length 0x8000 00:07:13.292 Nvme2n3 : 5.97 146.47 9.15 0.00 0.00 739627.00 26819.35 974369.08 00:07:13.292 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:13.292 Verification LBA range: start 0x8000 length 0x8000 00:07:13.292 Nvme2n3 : 6.02 124.49 7.78 0.00 0.00 852512.00 10233.70 1742249.35 00:07:13.292 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:13.292 Verification LBA range: start 0x0 length 0x2000 00:07:13.292 Nvme3n1 : 6.00 165.85 10.37 0.00 0.00 634939.70 775.09 1187310.67 00:07:13.292 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 
128, IO size: 65536) 00:07:13.292 Verification LBA range: start 0x2000 length 0x2000 00:07:13.292 Nvme3n1 : 6.13 201.45 12.59 0.00 0.00 513064.13 261.51 1780966.01 00:07:13.292 [2024-11-27T21:39:36.413Z] =================================================================================================================== 00:07:13.292 [2024-11-27T21:39:36.413Z] Total : 1590.03 99.38 0.00 0.00 848349.21 261.51 1780966.01 00:07:14.668 00:07:14.668 real 0m8.085s 00:07:14.668 user 0m15.497s 00:07:14.668 sys 0m0.210s 00:07:14.668 21:39:37 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.668 21:39:37 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:14.668 ************************************ 00:07:14.668 END TEST bdev_verify_big_io 00:07:14.668 ************************************ 00:07:14.668 21:39:37 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:14.668 21:39:37 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:14.668 21:39:37 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.668 21:39:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:14.668 ************************************ 00:07:14.668 START TEST bdev_write_zeroes 00:07:14.668 ************************************ 00:07:14.668 21:39:37 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:14.668 [2024-11-27 21:39:37.642880] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:07:14.668 [2024-11-27 21:39:37.642989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72113 ] 00:07:14.668 [2024-11-27 21:39:37.783557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.927 [2024-11-27 21:39:37.799872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.184 Running I/O for 1 seconds... 
00:07:16.124 75648.00 IOPS, 295.50 MiB/s 00:07:16.124 Latency(us) 00:07:16.124 [2024-11-27T21:39:39.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:16.124 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.124 Nvme0n1 : 1.02 12587.26 49.17 0.00 0.00 10150.50 4562.31 21273.99 00:07:16.124 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.124 Nvme1n1 : 1.02 12572.45 49.11 0.00 0.00 10151.35 6755.25 19257.50 00:07:16.124 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.124 Nvme2n1 : 1.02 12558.17 49.06 0.00 0.00 10141.64 6604.01 18551.73 00:07:16.124 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.124 Nvme2n2 : 1.02 12544.03 49.00 0.00 0.00 10129.56 6604.01 18148.43 00:07:16.124 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.124 Nvme2n3 : 1.02 12529.92 48.94 0.00 0.00 10123.77 6654.42 18249.26 00:07:16.124 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:16.124 Nvme3n1 : 1.02 12515.68 48.89 0.00 0.00 10112.14 6553.60 19761.62 00:07:16.124 [2024-11-27T21:39:39.245Z] =================================================================================================================== 00:07:16.124 [2024-11-27T21:39:39.245Z] Total : 75307.51 294.17 0.00 0.00 10134.83 4562.31 21273.99 00:07:16.384 00:07:16.384 real 0m1.775s 00:07:16.384 user 0m1.529s 00:07:16.385 sys 0m0.135s 00:07:16.385 ************************************ 00:07:16.385 END TEST bdev_write_zeroes 00:07:16.385 ************************************ 00:07:16.385 21:39:39 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.385 21:39:39 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:16.385 21:39:39 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:16.385 21:39:39 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:16.385 21:39:39 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.385 21:39:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:16.385 ************************************ 00:07:16.385 START TEST bdev_json_nonenclosed 00:07:16.385 ************************************ 00:07:16.385 21:39:39 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:16.385 [2024-11-27 21:39:39.474644] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:07:16.385 [2024-11-27 21:39:39.474748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72144 ] 00:07:16.645 [2024-11-27 21:39:39.618386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.645 [2024-11-27 21:39:39.637379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.645 [2024-11-27 21:39:39.637455] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:16.645 [2024-11-27 21:39:39.637472] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:16.645 [2024-11-27 21:39:39.637483] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:16.645 00:07:16.645 real 0m0.278s 00:07:16.645 user 0m0.103s 00:07:16.645 sys 0m0.073s 00:07:16.645 21:39:39 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.645 ************************************ 00:07:16.645 END TEST bdev_json_nonenclosed 00:07:16.645 ************************************ 00:07:16.645 21:39:39 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:16.645 21:39:39 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:16.645 21:39:39 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:16.645 21:39:39 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.645 21:39:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:16.645 ************************************ 00:07:16.645 START TEST bdev_json_nonarray 00:07:16.645 ************************************ 00:07:16.645 21:39:39 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:16.907 [2024-11-27 21:39:39.812016] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:07:16.907 [2024-11-27 21:39:39.812124] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72177 ] 00:07:16.907 [2024-11-27 21:39:39.958880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.907 [2024-11-27 21:39:39.977797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.907 [2024-11-27 21:39:39.977879] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
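Both errors above are the intended negative cases: bdev_json_nonenclosed feeds bdevperf a config whose top level is not enclosed in {}, and bdev_json_nonarray feeds one whose "subsystems" key is not an array, so the app is expected to stop with a non-zero code. For contrast, a hypothetical minimal config the loader would accept (the exact contents of the two fixtures are not shown in this log) has this shape:

cat > /tmp/minimal_bdev_config.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev", "config": [] }
  ]
}
EOF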
00:07:16.907 [2024-11-27 21:39:39.977896] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:16.907 [2024-11-27 21:39:39.977907] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:17.168 00:07:17.168 real 0m0.283s 00:07:17.168 user 0m0.101s 00:07:17.168 sys 0m0.079s 00:07:17.168 21:39:40 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.168 ************************************ 00:07:17.168 END TEST bdev_json_nonarray 00:07:17.168 ************************************ 00:07:17.168 21:39:40 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:17.168 21:39:40 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:07:17.168 21:39:40 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:07:17.168 21:39:40 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:07:17.168 21:39:40 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:17.168 21:39:40 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:07:17.168 21:39:40 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:17.168 21:39:40 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:17.168 21:39:40 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:17.168 21:39:40 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:17.168 21:39:40 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:17.168 21:39:40 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:17.168 00:07:17.168 real 0m30.429s 00:07:17.168 user 0m48.682s 00:07:17.168 sys 0m5.099s 00:07:17.168 ************************************ 00:07:17.168 END TEST blockdev_nvme 00:07:17.168 ************************************ 00:07:17.168 21:39:40 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.168 21:39:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:17.168 21:39:40 -- spdk/autotest.sh@209 -- # uname -s 00:07:17.168 21:39:40 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:17.168 21:39:40 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:17.168 21:39:40 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:17.168 21:39:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.168 21:39:40 -- common/autotest_common.sh@10 -- # set +x 00:07:17.168 ************************************ 00:07:17.168 START TEST blockdev_nvme_gpt 00:07:17.168 ************************************ 00:07:17.168 21:39:40 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:17.168 * Looking for test storage... 
00:07:17.168 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:17.168 21:39:40 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:17.168 21:39:40 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:07:17.168 21:39:40 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:17.168 21:39:40 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:17.168 21:39:40 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:17.168 21:39:40 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:17.168 21:39:40 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:17.168 21:39:40 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.168 21:39:40 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:17.168 21:39:40 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:17.168 21:39:40 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:17.168 21:39:40 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:17.168 21:39:40 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:17.168 21:39:40 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:17.168 21:39:40 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:17.168 21:39:40 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:17.168 21:39:40 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:17.168 21:39:40 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:17.168 21:39:40 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:17.168 21:39:40 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:17.168 21:39:40 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:17.168 21:39:40 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.168 21:39:40 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:17.168 21:39:40 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:17.428 21:39:40 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:17.428 21:39:40 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:17.428 21:39:40 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:17.428 21:39:40 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:17.428 21:39:40 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:17.428 21:39:40 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:17.428 21:39:40 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:17.428 21:39:40 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:17.428 21:39:40 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:17.428 21:39:40 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:17.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.428 --rc genhtml_branch_coverage=1 00:07:17.428 --rc genhtml_function_coverage=1 00:07:17.428 --rc genhtml_legend=1 00:07:17.428 --rc geninfo_all_blocks=1 00:07:17.428 --rc geninfo_unexecuted_blocks=1 00:07:17.428 00:07:17.428 ' 00:07:17.428 21:39:40 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:17.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.428 --rc 
genhtml_branch_coverage=1 00:07:17.428 --rc genhtml_function_coverage=1 00:07:17.428 --rc genhtml_legend=1 00:07:17.428 --rc geninfo_all_blocks=1 00:07:17.428 --rc geninfo_unexecuted_blocks=1 00:07:17.428 00:07:17.428 ' 00:07:17.428 21:39:40 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:17.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.428 --rc genhtml_branch_coverage=1 00:07:17.428 --rc genhtml_function_coverage=1 00:07:17.428 --rc genhtml_legend=1 00:07:17.428 --rc geninfo_all_blocks=1 00:07:17.428 --rc geninfo_unexecuted_blocks=1 00:07:17.428 00:07:17.428 ' 00:07:17.428 21:39:40 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:17.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.428 --rc genhtml_branch_coverage=1 00:07:17.428 --rc genhtml_function_coverage=1 00:07:17.428 --rc genhtml_legend=1 00:07:17.428 --rc geninfo_all_blocks=1 00:07:17.428 --rc geninfo_unexecuted_blocks=1 00:07:17.428 00:07:17.428 ' 00:07:17.428 21:39:40 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:17.428 21:39:40 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:17.428 21:39:40 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:17.428 21:39:40 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:17.428 21:39:40 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:17.428 21:39:40 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:17.428 21:39:40 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:17.429 21:39:40 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:17.429 21:39:40 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:17.429 21:39:40 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:07:17.429 21:39:40 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:07:17.429 21:39:40 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:07:17.429 21:39:40 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:07:17.429 21:39:40 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:07:17.429 21:39:40 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:07:17.429 21:39:40 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:07:17.429 21:39:40 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:07:17.429 21:39:40 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:07:17.429 21:39:40 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:07:17.429 21:39:40 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:07:17.429 21:39:40 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:07:17.429 21:39:40 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:07:17.429 21:39:40 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:07:17.429 21:39:40 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:07:17.429 21:39:40 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72250 00:07:17.429 21:39:40 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:17.429 21:39:40 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 72250 
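The trace that follows is setup_gpt_conf: scripts/setup.sh reset returns the NVMe controllers to the kernel nvme driver, zoned namespaces (whose queue/zoned attribute does not read "none") are excluded, the first namespace without a recognised disk label (/dev/nvme0n1 here) gets a GPT label with two partitions, and sgdisk then stamps those partitions with the SPDK partition type GUIDs read from module/bdev/gpt/gpt.h. Condensed from the commands traced below, same device and GUIDs:

/home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1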
00:07:17.429 21:39:40 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 72250 ']' 00:07:17.429 21:39:40 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.429 21:39:40 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:17.429 21:39:40 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.429 21:39:40 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.429 21:39:40 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.429 21:39:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:17.429 [2024-11-27 21:39:40.375765] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:07:17.429 [2024-11-27 21:39:40.375890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72250 ] 00:07:17.429 [2024-11-27 21:39:40.517468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.429 [2024-11-27 21:39:40.533956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.366 21:39:41 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.366 21:39:41 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:07:18.366 21:39:41 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:07:18.366 21:39:41 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:07:18.366 21:39:41 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:18.625 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:18.626 Waiting for block devices as requested 00:07:18.626 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:18.626 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:18.884 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:18.884 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:24.163 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:24.163 21:39:46 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:24.163 21:39:46 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:24.163 21:39:46 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:24.163 21:39:46 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:24.163 21:39:46 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:24.163 21:39:46 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:24.163 21:39:46 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:24.163 21:39:46 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:24.163 21:39:46 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:24.163 21:39:46 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:24.163 21:39:46 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:24.163 BYT; 00:07:24.163 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:24.163 21:39:46 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:24.163 BYT; 00:07:24.163 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:24.163 21:39:46 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:24.163 21:39:46 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:24.163 21:39:46 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:24.163 21:39:46 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:24.163 21:39:46 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:24.163 21:39:46 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:24.163 21:39:46 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:24.163 21:39:46 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:24.163 21:39:46 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:24.163 21:39:46 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:24.163 21:39:46 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:24.163 21:39:46 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:24.163 21:39:46 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:24.163 21:39:46 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:24.163 21:39:46 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:24.163 21:39:46 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:24.163 21:39:46 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:24.163 21:39:46 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:24.163 21:39:46 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:24.163 21:39:46 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:24.163 21:39:46 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:24.163 21:39:46 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:24.163 21:39:46 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:24.163 21:39:46 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:24.163 21:39:46 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:24.163 21:39:46 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:24.163 21:39:46 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:24.163 21:39:46 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:24.163 21:39:46 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:25.096 The operation has completed successfully. 00:07:25.096 21:39:48 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:26.028 The operation has completed successfully. 00:07:26.028 21:39:49 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:26.598 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:26.857 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:26.857 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:26.857 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:26.857 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:27.115 21:39:49 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:27.115 21:39:49 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.115 21:39:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:27.115 [] 00:07:27.115 21:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.115 21:39:50 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:27.115 21:39:50 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:27.115 21:39:50 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:27.115 21:39:50 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:27.115 21:39:50 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:27.115 21:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.115 21:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:27.373 21:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.373 21:39:50 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:27.373 21:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.373 21:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:27.373 21:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.373 21:39:50 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:07:27.373 21:39:50 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:27.373 21:39:50 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.373 21:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:27.373 21:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.373 21:39:50 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:27.373 21:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.373 21:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:27.373 21:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.373 21:39:50 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:27.373 21:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.373 21:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:27.373 21:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.373 21:39:50 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:27.373 21:39:50 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:27.373 21:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.373 21:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:27.373 21:39:50 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:27.373 21:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.373 21:39:50 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:27.373 21:39:50 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:27.374 21:39:50 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "d5292daa-e607-4c91-b9f1-ae1798bc5b46"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d5292daa-e607-4c91-b9f1-ae1798bc5b46",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "5d8df10c-7053-4901-9f1a-a0182cf93ebf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5d8df10c-7053-4901-9f1a-a0182cf93ebf",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "33cfd42e-15b8-446e-a3b6-258b99defd8f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "33cfd42e-15b8-446e-a3b6-258b99defd8f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "026a557d-c9ef-4aaf-b97e-eed6319cf091"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "026a557d-c9ef-4aaf-b97e-eed6319cf091",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "2aa0c899-c4cc-4949-bf9d-06e0346cf1e5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2aa0c899-c4cc-4949-bf9d-06e0346cf1e5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:27.374 21:39:50 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:27.374 21:39:50 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:27.374 21:39:50 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:27.374 21:39:50 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 72250 00:07:27.374 21:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 72250 ']' 00:07:27.374 21:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 72250 00:07:27.374 21:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:27.374 21:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.374 21:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72250 00:07:27.633 killing process with pid 72250 00:07:27.633 21:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:27.633 21:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:27.633 21:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72250' 00:07:27.633 21:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 72250 00:07:27.633 21:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 72250 00:07:27.633 21:39:50 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:27.633 21:39:50 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:27.633 21:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:27.633 21:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.633 21:39:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:27.633 ************************************ 00:07:27.633 START TEST bdev_hello_world 00:07:27.633 ************************************ 00:07:27.633 21:39:50 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:27.892 
[2024-11-27 21:39:50.801811] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:07:27.892 [2024-11-27 21:39:50.801919] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72861 ] 00:07:27.892 [2024-11-27 21:39:50.942370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.892 [2024-11-27 21:39:50.958958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.460 [2024-11-27 21:39:51.316698] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:28.460 [2024-11-27 21:39:51.316738] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:28.460 [2024-11-27 21:39:51.316752] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:28.461 [2024-11-27 21:39:51.318446] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:28.461 [2024-11-27 21:39:51.318829] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:28.461 [2024-11-27 21:39:51.318857] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:28.461 [2024-11-27 21:39:51.319010] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:28.461 00:07:28.461 [2024-11-27 21:39:51.319033] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:28.461 00:07:28.461 real 0m0.699s 00:07:28.461 user 0m0.471s 00:07:28.461 sys 0m0.127s 00:07:28.461 21:39:51 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.461 21:39:51 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:28.461 ************************************ 00:07:28.461 END TEST bdev_hello_world 00:07:28.461 ************************************ 00:07:28.461 21:39:51 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:28.461 21:39:51 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:28.461 21:39:51 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.461 21:39:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:28.461 ************************************ 00:07:28.461 START TEST bdev_bounds 00:07:28.461 ************************************ 00:07:28.461 21:39:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:28.461 21:39:51 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72886 00:07:28.461 21:39:51 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:28.461 Process bdevio pid: 72886 00:07:28.461 21:39:51 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72886' 00:07:28.461 21:39:51 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72886 00:07:28.461 21:39:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72886 ']' 00:07:28.461 21:39:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.461 21:39:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.461 21:39:51 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w 
-s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:28.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.461 21:39:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.461 21:39:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.461 21:39:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:28.461 [2024-11-27 21:39:51.567102] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:07:28.461 [2024-11-27 21:39:51.567220] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72886 ] 00:07:28.721 [2024-11-27 21:39:51.707256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:28.721 [2024-11-27 21:39:51.728468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.721 [2024-11-27 21:39:51.728676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.721 [2024-11-27 21:39:51.728683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.292 21:39:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.292 21:39:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:29.292 21:39:52 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:29.553 I/O targets: 00:07:29.553 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:29.553 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:29.553 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:29.553 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:29.553 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:29.553 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:29.553 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:29.553 00:07:29.553 00:07:29.553 CUnit - A unit testing framework for C - Version 2.1-3 00:07:29.553 http://cunit.sourceforge.net/ 00:07:29.553 00:07:29.553 00:07:29.553 Suite: bdevio tests on: Nvme3n1 00:07:29.553 Test: blockdev write read block ...passed 00:07:29.553 Test: blockdev write zeroes read block ...passed 00:07:29.553 Test: blockdev write zeroes read no split ...passed 00:07:29.553 Test: blockdev write zeroes read split ...passed 00:07:29.553 Test: blockdev write zeroes read split partial ...passed 00:07:29.553 Test: blockdev reset ...[2024-11-27 21:39:52.506569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:29.553 [2024-11-27 21:39:52.509700] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
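bdev_bounds runs the bdevio CUnit harness against the seven I/O targets listed above, repeating the same write/read/reset/comparev/passthru sequence per bdev; the COMPARE FAILURE notices printed by nvme_qpair.c appear alongside tests that are reported as passed. A sketch of the standalone flow, assuming the same checkout:

# start the bdevio app on the bdevs from bdev.json, then drive the suites over RPC
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
# once it reports listening on /var/tmp/spdk.sock:
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests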
00:07:29.553 passed 00:07:29.553 Test: blockdev write read 8 blocks ...passed 00:07:29.553 Test: blockdev write read size > 128k ...passed 00:07:29.553 Test: blockdev write read invalid size ...passed 00:07:29.553 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:29.553 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:29.553 Test: blockdev write read max offset ...passed 00:07:29.553 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:29.553 Test: blockdev writev readv 8 blocks ...passed 00:07:29.553 Test: blockdev writev readv 30 x 1block ...passed 00:07:29.553 Test: blockdev writev readv block ...passed 00:07:29.553 Test: blockdev writev readv size > 128k ...passed 00:07:29.553 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:29.553 Test: blockdev comparev and writev ...[2024-11-27 21:39:52.526996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b760e000 len:0x1000 00:07:29.553 [2024-11-27 21:39:52.527044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:29.553 passed 00:07:29.553 Test: blockdev nvme passthru rw ...passed 00:07:29.553 Test: blockdev nvme passthru vendor specific ...passed 00:07:29.553 Test: blockdev nvme admin passthru ...[2024-11-27 21:39:52.529537] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:29.553 [2024-11-27 21:39:52.529571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:29.553 passed 00:07:29.553 Test: blockdev copy ...passed 00:07:29.553 Suite: bdevio tests on: Nvme2n3 00:07:29.553 Test: blockdev write read block ...passed 00:07:29.553 Test: blockdev write zeroes read block ...passed 00:07:29.553 Test: blockdev write zeroes read no split ...passed 00:07:29.553 Test: blockdev write zeroes read split ...passed 00:07:29.553 Test: blockdev write zeroes read split partial ...passed 00:07:29.553 Test: blockdev reset ...[2024-11-27 21:39:52.557094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:29.553 [2024-11-27 21:39:52.559174] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:07:29.553 Test: blockdev write read 8 blocks ...uccessful. 
00:07:29.553 passed 00:07:29.553 Test: blockdev write read size > 128k ...passed 00:07:29.553 Test: blockdev write read invalid size ...passed 00:07:29.553 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:29.553 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:29.553 Test: blockdev write read max offset ...passed 00:07:29.553 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:29.553 Test: blockdev writev readv 8 blocks ...passed 00:07:29.553 Test: blockdev writev readv 30 x 1block ...passed 00:07:29.553 Test: blockdev writev readv block ...passed 00:07:29.553 Test: blockdev writev readv size > 128k ...passed 00:07:29.553 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:29.553 Test: blockdev comparev and writev ...[2024-11-27 21:39:52.574622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b7608000 len:0x1000 00:07:29.554 [2024-11-27 21:39:52.574661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:29.554 passed 00:07:29.554 Test: blockdev nvme passthru rw ...passed 00:07:29.554 Test: blockdev nvme passthru vendor specific ...passed 00:07:29.554 Test: blockdev nvme admin passthru ...[2024-11-27 21:39:52.576590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:29.554 [2024-11-27 21:39:52.576620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:29.554 passed 00:07:29.554 Test: blockdev copy ...passed 00:07:29.554 Suite: bdevio tests on: Nvme2n2 00:07:29.554 Test: blockdev write read block ...passed 00:07:29.554 Test: blockdev write zeroes read block ...passed 00:07:29.554 Test: blockdev write zeroes read no split ...passed 00:07:29.554 Test: blockdev write zeroes read split ...passed 00:07:29.554 Test: blockdev write zeroes read split partial ...passed 00:07:29.554 Test: blockdev reset ...[2024-11-27 21:39:52.594716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:29.554 [2024-11-27 21:39:52.597604] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:07:29.554 Test: blockdev write read 8 blocks ...uccessful. 
00:07:29.554 passed 00:07:29.554 Test: blockdev write read size > 128k ...passed 00:07:29.554 Test: blockdev write read invalid size ...passed 00:07:29.554 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:29.554 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:29.554 Test: blockdev write read max offset ...passed 00:07:29.554 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:29.554 Test: blockdev writev readv 8 blocks ...passed 00:07:29.554 Test: blockdev writev readv 30 x 1block ...passed 00:07:29.554 Test: blockdev writev readv block ...passed 00:07:29.554 Test: blockdev writev readv size > 128k ...passed 00:07:29.554 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:29.554 Test: blockdev comparev and writev ...[2024-11-27 21:39:52.613530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b7602000 len:0x1000 00:07:29.554 [2024-11-27 21:39:52.613566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:29.554 passed 00:07:29.554 Test: blockdev nvme passthru rw ...passed 00:07:29.554 Test: blockdev nvme passthru vendor specific ...[2024-11-27 21:39:52.616122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1passed 00:07:29.554 Test: blockdev nvme admin passthru ... cid:190 PRP1 0x0 PRP2 0x0 00:07:29.554 [2024-11-27 21:39:52.616229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:29.554 passed 00:07:29.554 Test: blockdev copy ...passed 00:07:29.554 Suite: bdevio tests on: Nvme2n1 00:07:29.554 Test: blockdev write read block ...passed 00:07:29.554 Test: blockdev write zeroes read block ...passed 00:07:29.554 Test: blockdev write zeroes read no split ...passed 00:07:29.554 Test: blockdev write zeroes read split ...passed 00:07:29.554 Test: blockdev write zeroes read split partial ...passed 00:07:29.554 Test: blockdev reset ...[2024-11-27 21:39:52.644267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:29.554 [2024-11-27 21:39:52.649367] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:29.554 passed 00:07:29.554 Test: blockdev write read 8 blocks ...passed 00:07:29.554 Test: blockdev write read size > 128k ...passed 00:07:29.554 Test: blockdev write read invalid size ...passed 00:07:29.554 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:29.554 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:29.554 Test: blockdev write read max offset ...passed 00:07:29.554 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:29.554 Test: blockdev writev readv 8 blocks ...passed 00:07:29.554 Test: blockdev writev readv 30 x 1block ...passed 00:07:29.554 Test: blockdev writev readv block ...passed 00:07:29.554 Test: blockdev writev readv size > 128k ...passed 00:07:29.554 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:29.554 Test: blockdev comparev and writev ...[2024-11-27 21:39:52.664968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b7a04000 len:0x1000 00:07:29.554 [2024-11-27 21:39:52.665005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:29.554 passed 00:07:29.554 Test: blockdev nvme passthru rw ...passed 00:07:29.554 Test: blockdev nvme passthru vendor specific ...passed 00:07:29.554 Test: blockdev nvme admin passthru ...[2024-11-27 21:39:52.667583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:29.554 [2024-11-27 21:39:52.667615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:29.815 passed 00:07:29.815 Test: blockdev copy ...passed 00:07:29.815 Suite: bdevio tests on: Nvme1n1p2 00:07:29.815 Test: blockdev write read block ...passed 00:07:29.815 Test: blockdev write zeroes read block ...passed 00:07:29.815 Test: blockdev write zeroes read no split ...passed 00:07:29.815 Test: blockdev write zeroes read split ...passed 00:07:29.815 Test: blockdev write zeroes read split partial ...passed 00:07:29.815 Test: blockdev reset ...[2024-11-27 21:39:52.698825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:29.815 [2024-11-27 21:39:52.701933] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:29.815 passed 00:07:29.815 Test: blockdev write read 8 blocks ...passed 00:07:29.815 Test: blockdev write read size > 128k ...passed 00:07:29.815 Test: blockdev write read invalid size ...passed 00:07:29.815 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:29.815 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:29.815 Test: blockdev write read max offset ...passed 00:07:29.815 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:29.815 Test: blockdev writev readv 8 blocks ...passed 00:07:29.815 Test: blockdev writev readv 30 x 1block ...passed 00:07:29.815 Test: blockdev writev readv block ...passed 00:07:29.815 Test: blockdev writev readv size > 128k ...passed 00:07:29.815 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:29.815 Test: blockdev comparev and writev ...[2024-11-27 21:39:52.717518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d4a3d000 len:0x1000 00:07:29.815 [2024-11-27 21:39:52.717554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:29.815 passed 00:07:29.815 Test: blockdev nvme passthru rw ...passed 00:07:29.815 Test: blockdev nvme passthru vendor specific ...passed 00:07:29.815 Test: blockdev nvme admin passthru ...passed 00:07:29.815 Test: blockdev copy ...passed 00:07:29.815 Suite: bdevio tests on: Nvme1n1p1 00:07:29.815 Test: blockdev write read block ...passed 00:07:29.815 Test: blockdev write zeroes read block ...passed 00:07:29.815 Test: blockdev write zeroes read no split ...passed 00:07:29.815 Test: blockdev write zeroes read split ...passed 00:07:29.815 Test: blockdev write zeroes read split partial ...passed 00:07:29.815 Test: blockdev reset ...[2024-11-27 21:39:52.748112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:29.815 [2024-11-27 21:39:52.751214] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. passed 00:07:29.815 Test: blockdev write read 8 blocks ... 
00:07:29.815 passed 00:07:29.815 Test: blockdev write read size > 128k ...passed 00:07:29.815 Test: blockdev write read invalid size ...passed 00:07:29.816 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:29.816 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:29.816 Test: blockdev write read max offset ...passed 00:07:29.816 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:29.816 Test: blockdev writev readv 8 blocks ...passed 00:07:29.816 Test: blockdev writev readv 30 x 1block ...passed 00:07:29.816 Test: blockdev writev readv block ...passed 00:07:29.816 Test: blockdev writev readv size > 128k ...passed 00:07:29.816 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:29.816 Test: blockdev comparev and writev ...[2024-11-27 21:39:52.761048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2d4a39000 len:0x1000 00:07:29.816 [2024-11-27 21:39:52.761088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:29.816 passed 00:07:29.816 Test: blockdev nvme passthru rw ...passed 00:07:29.816 Test: blockdev nvme passthru vendor specific ...passed 00:07:29.816 Test: blockdev nvme admin passthru ...passed 00:07:29.816 Test: blockdev copy ...passed 00:07:29.816 Suite: bdevio tests on: Nvme0n1 00:07:29.816 Test: blockdev write read block ...passed 00:07:29.816 Test: blockdev write zeroes read block ...passed 00:07:29.816 Test: blockdev write zeroes read no split ...passed 00:07:29.816 Test: blockdev write zeroes read split ...passed 00:07:29.816 Test: blockdev write zeroes read split partial ...passed 00:07:29.816 Test: blockdev reset ...[2024-11-27 21:39:52.798079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:29.816 passed 00:07:29.816 Test: blockdev write read 8 blocks ...[2024-11-27 21:39:52.800069] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:29.816 passed 00:07:29.816 Test: blockdev write read size > 128k ...passed 00:07:29.816 Test: blockdev write read invalid size ...passed 00:07:29.816 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:29.816 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:29.816 Test: blockdev write read max offset ...passed 00:07:29.816 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:29.816 Test: blockdev writev readv 8 blocks ...passed 00:07:29.816 Test: blockdev writev readv 30 x 1block ...passed 00:07:29.816 Test: blockdev writev readv block ...passed 00:07:29.816 Test: blockdev writev readv size > 128k ...passed 00:07:29.816 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:29.816 Test: blockdev comparev and writev ...passed 00:07:29.816 Test: blockdev nvme passthru rw ...[2024-11-27 21:39:52.807647] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:29.816 separate metadata which is not supported yet. 
00:07:29.816 passed 00:07:29.816 Test: blockdev nvme passthru vendor specific ...[2024-11-27 21:39:52.808467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:29.816 [2024-11-27 21:39:52.808500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:29.816 passed 00:07:29.816 Test: blockdev nvme admin passthru ...passed 00:07:29.816 Test: blockdev copy ...passed 00:07:29.816 00:07:29.816 Run Summary: Type Total Ran Passed Failed Inactive 00:07:29.816 suites 7 7 n/a 0 0 00:07:29.816 tests 161 161 161 0 0 00:07:29.816 asserts 1025 1025 1025 0 n/a 00:07:29.816 00:07:29.816 Elapsed time = 0.697 seconds 00:07:29.816 0 00:07:29.816 21:39:52 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72886 00:07:29.816 21:39:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72886 ']' 00:07:29.816 21:39:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72886 00:07:29.816 21:39:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:29.816 21:39:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.816 21:39:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72886 00:07:29.816 21:39:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.816 21:39:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.816 21:39:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72886' 00:07:29.816 killing process with pid 72886 00:07:29.816 21:39:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72886 00:07:29.816 21:39:52 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72886 00:07:31.726 21:39:54 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:31.726 00:07:31.726 real 0m2.964s 00:07:31.726 user 0m7.967s 00:07:31.726 sys 0m0.281s 00:07:31.726 21:39:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.726 21:39:54 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:31.726 ************************************ 00:07:31.726 END TEST bdev_bounds 00:07:31.726 ************************************ 00:07:31.726 21:39:54 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:31.726 21:39:54 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:31.726 21:39:54 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.726 21:39:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:31.726 ************************************ 00:07:31.726 START TEST bdev_nbd 00:07:31.726 ************************************ 00:07:31.726 21:39:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:31.726 21:39:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:31.726 21:39:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:07:31.726 21:39:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.726 21:39:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:31.726 21:39:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:31.726 21:39:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:31.726 21:39:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:31.726 21:39:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:31.726 21:39:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:31.726 21:39:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:31.726 21:39:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:31.726 21:39:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:31.726 21:39:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:31.726 21:39:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:31.726 21:39:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:31.726 21:39:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72946 00:07:31.726 21:39:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:31.727 21:39:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72946 /var/tmp/spdk-nbd.sock 00:07:31.727 21:39:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72946 ']' 00:07:31.727 21:39:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:31.727 21:39:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:31.727 21:39:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:31.727 21:39:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.727 21:39:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:31.727 21:39:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:31.727 [2024-11-27 21:39:54.587929] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:07:31.727 [2024-11-27 21:39:54.588032] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.727 [2024-11-27 21:39:54.732661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.727 [2024-11-27 21:39:54.751793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:32.665 1+0 records in 00:07:32.665 1+0 records out 00:07:32.665 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000741165 s, 5.5 MB/s 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:32.665 21:39:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:32.925 21:39:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:32.925 21:39:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:32.925 21:39:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:32.925 21:39:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:32.925 21:39:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:32.925 21:39:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:32.925 21:39:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:32.925 21:39:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:32.925 21:39:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:32.925 21:39:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:32.925 21:39:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:32.925 21:39:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:32.925 1+0 records in 00:07:32.925 1+0 records out 00:07:32.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000699504 s, 5.9 MB/s 00:07:32.925 21:39:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:32.925 21:39:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:32.925 21:39:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:32.925 21:39:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:32.925 21:39:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:32.925 21:39:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:32.925 21:39:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:32.925 21:39:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:33.186 21:39:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:33.187 21:39:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:33.187 21:39:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:33.187 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:33.187 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:33.187 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:33.187 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:33.187 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:33.187 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:33.187 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:33.187 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:33.187 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:33.187 1+0 records in 00:07:33.187 1+0 records out 00:07:33.187 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000920575 s, 4.4 MB/s 00:07:33.187 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.187 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:33.187 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.187 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:33.187 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:33.187 21:39:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:33.187 21:39:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:33.187 21:39:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:33.187 21:39:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:33.187 21:39:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:33.447 1+0 records in 00:07:33.447 1+0 records out 00:07:33.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528886 s, 7.7 MB/s 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:33.447 1+0 records in 00:07:33.447 1+0 records out 00:07:33.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111172 s, 3.7 MB/s 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:33.447 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.448 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:33.448 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:33.448 21:39:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:33.448 21:39:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:33.448 21:39:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
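The records above repeat one attach-and-verify cycle per bdev: rpc.py nbd_start_disk exports the bdev, the returned device path is captured, and the helper then polls /proc/partitions for the new nbd name and reads a single 4096-byte block with iflag=direct to confirm the node answers I/O. Below is a minimal bash sketch of that cycle, reconstructed from the traced commands; the SPDK checkout path ($SPDK_DIR), the scratch file location, and the retry delay are assumptions, not the exact values used by autotest_common.sh.

  # Sketch only: wait for an nbd node to appear, then prove it with one direct read.
  # /tmp/nbdtest and the 0.1 s retry delay are placeholders (not taken from the log).
  wait_for_nbd() {
    local nbd_name=$1            # e.g. "nbd0", without the /dev/ prefix
    local scratch=/tmp/nbdtest
    local i
    for ((i = 1; i <= 20; i++)); do
      grep -q -w "$nbd_name" /proc/partitions && break
      sleep 0.1
    done
    dd if=/dev/"$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct || return 1
    [ "$(stat -c %s "$scratch")" != 0 ] || return 1
    rm -f "$scratch"
  }
  # usage, mirroring the trace (the RPC prints the chosen device path):
  # dev=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1) && wait_for_nbd "$(basename "$dev")"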
00:07:33.708 21:39:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:33.708 21:39:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:33.708 21:39:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:33.708 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:33.708 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:33.708 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:33.708 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:33.708 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:33.708 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:33.708 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:33.708 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:33.708 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:33.708 1+0 records in 00:07:33.708 1+0 records out 00:07:33.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000815069 s, 5.0 MB/s 00:07:33.708 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.708 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:33.708 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.708 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:33.708 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:33.708 21:39:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:33.708 21:39:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:33.708 21:39:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:33.969 21:39:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:33.969 21:39:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:33.969 21:39:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:33.969 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:33.969 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:33.969 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:33.969 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:33.969 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:33.969 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:33.969 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:33.969 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:33.969 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:33.969 1+0 records in 00:07:33.969 1+0 records out 00:07:33.969 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490973 s, 8.3 MB/s 00:07:33.969 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.969 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:33.969 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.969 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:33.969 21:39:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:33.969 21:39:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:33.969 21:39:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:33.969 21:39:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:34.232 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:34.232 { 00:07:34.232 "nbd_device": "/dev/nbd0", 00:07:34.232 "bdev_name": "Nvme0n1" 00:07:34.232 }, 00:07:34.232 { 00:07:34.232 "nbd_device": "/dev/nbd1", 00:07:34.232 "bdev_name": "Nvme1n1p1" 00:07:34.232 }, 00:07:34.232 { 00:07:34.232 "nbd_device": "/dev/nbd2", 00:07:34.232 "bdev_name": "Nvme1n1p2" 00:07:34.232 }, 00:07:34.232 { 00:07:34.232 "nbd_device": "/dev/nbd3", 00:07:34.232 "bdev_name": "Nvme2n1" 00:07:34.232 }, 00:07:34.233 { 00:07:34.233 "nbd_device": "/dev/nbd4", 00:07:34.233 "bdev_name": "Nvme2n2" 00:07:34.233 }, 00:07:34.233 { 00:07:34.233 "nbd_device": "/dev/nbd5", 00:07:34.233 "bdev_name": "Nvme2n3" 00:07:34.233 }, 00:07:34.233 { 00:07:34.233 "nbd_device": "/dev/nbd6", 00:07:34.233 "bdev_name": "Nvme3n1" 00:07:34.233 } 00:07:34.233 ]' 00:07:34.233 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:34.233 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:34.233 { 00:07:34.233 "nbd_device": "/dev/nbd0", 00:07:34.233 "bdev_name": "Nvme0n1" 00:07:34.233 }, 00:07:34.233 { 00:07:34.233 "nbd_device": "/dev/nbd1", 00:07:34.233 "bdev_name": "Nvme1n1p1" 00:07:34.233 }, 00:07:34.233 { 00:07:34.233 "nbd_device": "/dev/nbd2", 00:07:34.233 "bdev_name": "Nvme1n1p2" 00:07:34.233 }, 00:07:34.233 { 00:07:34.233 "nbd_device": "/dev/nbd3", 00:07:34.233 "bdev_name": "Nvme2n1" 00:07:34.233 }, 00:07:34.233 { 00:07:34.233 "nbd_device": "/dev/nbd4", 00:07:34.233 "bdev_name": "Nvme2n2" 00:07:34.233 }, 00:07:34.233 { 00:07:34.233 "nbd_device": "/dev/nbd5", 00:07:34.233 "bdev_name": "Nvme2n3" 00:07:34.233 }, 00:07:34.233 { 00:07:34.233 "nbd_device": "/dev/nbd6", 00:07:34.233 "bdev_name": "Nvme3n1" 00:07:34.233 } 00:07:34.233 ]' 00:07:34.233 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:34.233 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:34.233 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.233 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:34.233 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:34.233 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:34.233 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:34.233 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:34.494 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:34.494 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:34.494 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:34.494 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:34.494 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:34.494 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:34.494 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:34.494 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:34.494 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:34.494 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:34.755 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:34.755 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:34.755 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:34.755 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:34.755 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:34.755 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:34.756 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:34.756 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:34.756 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:34.756 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:34.756 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:34.756 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:34.756 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:34.756 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:34.756 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:34.756 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:34.756 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:34.756 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:34.756 21:39:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:34.756 21:39:57 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:35.017 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:35.017 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:35.017 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:35.017 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:35.017 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:35.017 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:35.017 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:35.017 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:35.017 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:35.017 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:35.298 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:35.298 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:35.298 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:35.298 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:35.298 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:35.298 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:35.298 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:35.298 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:35.298 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:35.298 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:35.614 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:35.614 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:35.614 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:35.614 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:35.614 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:35.614 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:35.614 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:35.614 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:35.614 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:35.614 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:35.614 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:35.614 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:35.614 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
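The teardown being traced here is the mirror image of the attach loop: nbd_get_disks reports each exported device as JSON, jq extracts the nbd_device paths, each path is passed to nbd_stop_disk, and the helper waits for the name to drop out of /proc/partitions. A bash sketch of that loop follows; the repo path and polling delay are assumptions, and the RPC names are the ones shown in the trace.

  # Sketch only: stop every exported nbd device and wait for the node to vanish.
  # "$SPDK_DIR" is a placeholder for the SPDK checkout used by this job.
  rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock "$@"; }
  for dev in $(rpc nbd_get_disks | jq -r '.[] | .nbd_device'); do
    rpc nbd_stop_disk "$dev"
    name=$(basename "$dev")
    for ((i = 1; i <= 20; i++)); do
      grep -q -w "$name" /proc/partitions || break   # gone from the kernel -> done
      sleep 0.1                                      # delay is an assumption
    done
  done
  rpc nbd_get_disks   # should now print an empty list: []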
00:07:35.614 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:35.614 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:35.614 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:35.614 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:35.614 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:35.614 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:35.614 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.614 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:35.873 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:35.873 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:35.873 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:35.873 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:35.873 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:35.873 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:35.873 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:35.873 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:35.873 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:35.873 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:35.873 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:35.873 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:35.874 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:35.874 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.874 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:35.874 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:35.874 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:35.874 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:35.874 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:35.874 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.874 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:35.874 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:35.874 21:39:58 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:35.874 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:35.874 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:35.874 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:35.874 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:35.874 21:39:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:36.135 /dev/nbd0 00:07:36.135 21:39:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:36.135 21:39:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:36.135 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:36.135 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:36.135 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:36.135 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:36.135 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:36.135 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:36.135 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:36.135 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:36.135 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:36.135 1+0 records in 00:07:36.135 1+0 records out 00:07:36.135 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000853403 s, 4.8 MB/s 00:07:36.135 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.135 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:36.135 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.135 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:36.135 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:36.135 21:39:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:36.135 21:39:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:36.135 21:39:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:36.396 /dev/nbd1 00:07:36.396 21:39:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:36.396 21:39:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:36.396 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:36.396 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:36.396 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:36.396 21:39:59 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:36.396 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:36.396 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:36.396 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:36.396 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:36.396 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:36.396 1+0 records in 00:07:36.396 1+0 records out 00:07:36.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000871519 s, 4.7 MB/s 00:07:36.396 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.396 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:36.396 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.396 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:36.396 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:36.396 21:39:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:36.396 21:39:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:36.396 21:39:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:36.657 /dev/nbd10 00:07:36.657 21:39:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:36.657 21:39:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:36.657 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:36.657 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:36.657 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:36.657 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:36.657 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:36.657 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:36.657 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:36.657 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:36.658 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:36.658 1+0 records in 00:07:36.658 1+0 records out 00:07:36.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119006 s, 3.4 MB/s 00:07:36.658 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.658 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:36.658 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.658 21:39:59 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:36.658 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:36.658 21:39:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:36.658 21:39:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:36.658 21:39:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:36.920 /dev/nbd11 00:07:36.920 21:39:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:36.920 21:39:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:36.920 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:36.920 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:36.920 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:36.920 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:36.920 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:36.920 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:36.920 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:36.920 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:36.920 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:36.920 1+0 records in 00:07:36.920 1+0 records out 00:07:36.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108943 s, 3.8 MB/s 00:07:36.920 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.920 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:36.920 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:36.920 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:36.920 21:39:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:36.920 21:39:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:36.920 21:39:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:36.920 21:39:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:37.180 /dev/nbd12 00:07:37.180 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:37.180 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:37.180 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:37.180 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:37.180 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:37.180 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:37.180 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
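The data-verification pass being traced here differs from the first start/stop pass in one respect: each bdev is pinned to a fixed node from the nbd_list array (/dev/nbd0, /dev/nbd1, /dev/nbd10, ...) instead of letting the RPC choose one. A sketch of that pairing is below, reusing the wait_for_nbd helper sketched after the first attach pass; the array contents come from the trace, the paths around them are assumptions.

  # Sketch only: attach each bdev to its preassigned /dev/nbdX node.
  rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock "$@"; }
  bdevs=(Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
  nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)
  for i in "${!bdevs[@]}"; do
    rpc nbd_start_disk "${bdevs[$i]}" "${nbds[$i]}"   # RPC echoes the device path on success
    wait_for_nbd "$(basename "${nbds[$i]}")"          # helper from the earlier sketch
  done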
00:07:37.180 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:37.180 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:37.180 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:37.180 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:37.180 1+0 records in 00:07:37.180 1+0 records out 00:07:37.180 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011223 s, 3.6 MB/s 00:07:37.180 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.180 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:37.180 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.180 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:37.180 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:37.180 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:37.180 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:37.180 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:37.180 /dev/nbd13 00:07:37.180 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:37.180 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:37.180 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:37.180 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:37.180 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:37.181 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:37.181 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:37.181 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:37.181 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:37.181 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:37.181 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:37.439 1+0 records in 00:07:37.439 1+0 records out 00:07:37.439 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0017929 s, 2.3 MB/s 00:07:37.439 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.439 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:37.439 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.439 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:37.439 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:37.439 21:40:00 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:37.439 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:37.439 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:37.439 /dev/nbd14 00:07:37.439 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:37.439 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:37.439 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:37.439 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:37.439 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:37.439 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:37.439 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:37.439 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:37.439 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:37.439 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:37.439 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:37.439 1+0 records in 00:07:37.439 1+0 records out 00:07:37.439 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031882 s, 12.8 MB/s 00:07:37.439 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.439 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:37.439 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.439 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:37.439 21:40:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:37.439 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:37.439 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:37.439 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:37.439 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.439 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:37.698 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:37.698 { 00:07:37.698 "nbd_device": "/dev/nbd0", 00:07:37.698 "bdev_name": "Nvme0n1" 00:07:37.698 }, 00:07:37.698 { 00:07:37.698 "nbd_device": "/dev/nbd1", 00:07:37.698 "bdev_name": "Nvme1n1p1" 00:07:37.698 }, 00:07:37.698 { 00:07:37.698 "nbd_device": "/dev/nbd10", 00:07:37.698 "bdev_name": "Nvme1n1p2" 00:07:37.698 }, 00:07:37.698 { 00:07:37.698 "nbd_device": "/dev/nbd11", 00:07:37.698 "bdev_name": "Nvme2n1" 00:07:37.698 }, 00:07:37.698 { 00:07:37.698 "nbd_device": "/dev/nbd12", 00:07:37.698 "bdev_name": "Nvme2n2" 00:07:37.698 }, 00:07:37.698 { 00:07:37.698 "nbd_device": "/dev/nbd13", 00:07:37.698 "bdev_name": "Nvme2n3" 
00:07:37.698 }, 00:07:37.698 { 00:07:37.698 "nbd_device": "/dev/nbd14", 00:07:37.698 "bdev_name": "Nvme3n1" 00:07:37.698 } 00:07:37.698 ]' 00:07:37.698 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:37.698 { 00:07:37.698 "nbd_device": "/dev/nbd0", 00:07:37.698 "bdev_name": "Nvme0n1" 00:07:37.698 }, 00:07:37.698 { 00:07:37.698 "nbd_device": "/dev/nbd1", 00:07:37.698 "bdev_name": "Nvme1n1p1" 00:07:37.699 }, 00:07:37.699 { 00:07:37.699 "nbd_device": "/dev/nbd10", 00:07:37.699 "bdev_name": "Nvme1n1p2" 00:07:37.699 }, 00:07:37.699 { 00:07:37.699 "nbd_device": "/dev/nbd11", 00:07:37.699 "bdev_name": "Nvme2n1" 00:07:37.699 }, 00:07:37.699 { 00:07:37.699 "nbd_device": "/dev/nbd12", 00:07:37.699 "bdev_name": "Nvme2n2" 00:07:37.699 }, 00:07:37.699 { 00:07:37.699 "nbd_device": "/dev/nbd13", 00:07:37.699 "bdev_name": "Nvme2n3" 00:07:37.699 }, 00:07:37.699 { 00:07:37.699 "nbd_device": "/dev/nbd14", 00:07:37.699 "bdev_name": "Nvme3n1" 00:07:37.699 } 00:07:37.699 ]' 00:07:37.699 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:37.699 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:37.699 /dev/nbd1 00:07:37.699 /dev/nbd10 00:07:37.699 /dev/nbd11 00:07:37.699 /dev/nbd12 00:07:37.699 /dev/nbd13 00:07:37.699 /dev/nbd14' 00:07:37.699 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:37.699 /dev/nbd1 00:07:37.699 /dev/nbd10 00:07:37.699 /dev/nbd11 00:07:37.699 /dev/nbd12 00:07:37.699 /dev/nbd13 00:07:37.699 /dev/nbd14' 00:07:37.699 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:37.699 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:37.699 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:37.699 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:37.699 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:37.699 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:37.699 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:37.699 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:37.699 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:37.699 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:37.699 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:37.699 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:37.699 256+0 records in 00:07:37.699 256+0 records out 00:07:37.699 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00496958 s, 211 MB/s 00:07:37.699 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:37.699 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:37.957 256+0 records in 00:07:37.957 256+0 records out 00:07:37.957 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0531501 s, 19.7 MB/s 00:07:37.957 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:37.957 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:37.957 256+0 records in 00:07:37.957 256+0 records out 00:07:37.957 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0586486 s, 17.9 MB/s 00:07:37.957 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:37.957 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:37.957 256+0 records in 00:07:37.957 256+0 records out 00:07:37.957 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0547287 s, 19.2 MB/s 00:07:37.957 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:37.957 21:40:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:37.957 256+0 records in 00:07:37.957 256+0 records out 00:07:37.957 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0538588 s, 19.5 MB/s 00:07:37.957 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:37.957 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:38.216 256+0 records in 00:07:38.216 256+0 records out 00:07:38.216 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0531129 s, 19.7 MB/s 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:38.216 256+0 records in 00:07:38.216 256+0 records out 00:07:38.216 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0525407 s, 20.0 MB/s 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:38.216 256+0 records in 00:07:38.216 256+0 records out 00:07:38.216 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0554374 s, 18.9 MB/s 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:38.216 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:38.474 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:38.474 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:38.474 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:38.474 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:38.474 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:38.474 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:38.474 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:38.474 21:40:01 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:38.474 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:38.474 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:38.732 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:38.732 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:38.732 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:38.732 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:38.732 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:38.732 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:38.732 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:38.732 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:38.732 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:38.732 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:38.991 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:38.991 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:38.991 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:38.991 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:38.991 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:38.991 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:38.991 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:38.991 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:38.991 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:38.991 21:40:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:38.991 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:38.991 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:38.991 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:38.991 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:38.991 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:38.991 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:38.991 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:38.991 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:38.991 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:38.991 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:39.249 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:39.249 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:39.249 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:39.249 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:39.249 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:39.249 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:39.249 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:39.249 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:39.249 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:39.249 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:39.508 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:39.508 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:39.508 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:39.508 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:39.508 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:39.508 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:39.508 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:39.508 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:39.508 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:39.508 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:39.766 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:39.766 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:39.766 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:39.766 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:39.766 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:39.766 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:39.766 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:39.766 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:39.766 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:39.766 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.766 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:40.024 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:40.024 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:40.024 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:40.024 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:40.024 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:40.024 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:40.024 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:40.024 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:40.024 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:40.024 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:40.024 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:40.024 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:40.024 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:40.024 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.024 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:40.024 21:40:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:40.024 malloc_lvol_verify 00:07:40.024 21:40:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:40.281 b2f7c21d-d0a5-4a9c-8811-ef3e27126b20 00:07:40.281 21:40:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:40.539 ef352efe-6b1d-4e84-841c-0f1450ecc127 00:07:40.539 21:40:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:40.796 /dev/nbd0 00:07:40.796 21:40:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:40.796 21:40:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:40.796 21:40:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:40.796 21:40:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:40.796 21:40:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:40.796 mke2fs 1.47.0 (5-Feb-2023) 00:07:40.796 Discarding device blocks: 0/4096 done 00:07:40.796 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:40.796 00:07:40.796 Allocating group tables: 0/1 done 00:07:40.796 Writing inode tables: 0/1 done 00:07:40.796 Creating journal (1024 blocks): done 00:07:40.796 Writing superblocks and filesystem accounting information: 0/1 done 00:07:40.796 00:07:40.796 21:40:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:40.797 21:40:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:40.797 21:40:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:40.797 21:40:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:40.797 21:40:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:40.797 21:40:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:40.797 21:40:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:41.056 21:40:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:41.056 21:40:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:41.056 21:40:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:41.056 21:40:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:41.057 21:40:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:41.057 21:40:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:41.057 21:40:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:41.057 21:40:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:41.057 21:40:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72946 00:07:41.057 21:40:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72946 ']' 00:07:41.057 21:40:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72946 00:07:41.057 21:40:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:41.057 21:40:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.057 21:40:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72946 00:07:41.057 21:40:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.057 21:40:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.057 21:40:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72946' 00:07:41.057 killing process with pid 72946 00:07:41.057 21:40:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72946 00:07:41.057 21:40:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72946 00:07:41.057 21:40:04 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:41.057 00:07:41.057 real 0m9.599s 00:07:41.057 user 0m14.118s 00:07:41.057 sys 0m3.265s 00:07:41.057 ************************************ 00:07:41.057 END TEST bdev_nbd 00:07:41.057 ************************************ 00:07:41.057 21:40:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.057 21:40:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:41.057 21:40:04 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:41.057 21:40:04 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:07:41.057 skipping fio tests on NVMe due to multi-ns failures. 00:07:41.057 21:40:04 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:07:41.057 21:40:04 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:41.057 21:40:04 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:41.057 21:40:04 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:41.057 21:40:04 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:41.057 21:40:04 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.057 21:40:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:41.315 ************************************ 00:07:41.315 START TEST bdev_verify 00:07:41.315 ************************************ 00:07:41.315 21:40:04 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:41.315 [2024-11-27 21:40:04.243250] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:07:41.315 [2024-11-27 21:40:04.243363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73342 ] 00:07:41.315 [2024-11-27 21:40:04.386409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:41.315 [2024-11-27 21:40:04.403235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.315 [2024-11-27 21:40:04.403305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.885 Running I/O for 5 seconds... 
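Before the results print, it is worth unpacking the bdevperf invocation shown above. The flag meanings below are read off the log itself: the job headers echo the queue depth, I/O size, and workload, the two reactors correspond to the 0x3 core mask, and the per-core job lines in the results that follow suggest -C fans each core out to every bdev.

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json .../test/bdev/bdev.json \  # bdev configuration to load (path abbreviated)
      -q 128 \                          # queue depth per job ("depth: 128" in the job headers)
      -o 4096 \                         # I/O size in bytes ("IO size: 4096")
      -w verify \                       # write-then-read-back workload with data verification
      -t 5 \                            # run for 5 seconds
      -C -m 0x3                         # two cores (mask 0x3), each apparently driving every bdev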
00:07:44.206 19712.00 IOPS, 77.00 MiB/s [2024-11-27T21:40:08.269Z] 19968.00 IOPS, 78.00 MiB/s [2024-11-27T21:40:09.213Z] 20373.33 IOPS, 79.58 MiB/s [2024-11-27T21:40:10.151Z] 20240.00 IOPS, 79.06 MiB/s [2024-11-27T21:40:10.151Z] 20288.00 IOPS, 79.25 MiB/s 00:07:47.030 Latency(us) 00:07:47.030 [2024-11-27T21:40:10.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:47.030 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:47.030 Verification LBA range: start 0x0 length 0xbd0bd 00:07:47.030 Nvme0n1 : 5.08 1435.66 5.61 0.00 0.00 88887.48 20467.40 98404.82 00:07:47.030 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:47.030 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:47.030 Nvme0n1 : 5.05 1419.17 5.54 0.00 0.00 89864.80 19559.98 105664.20 00:07:47.030 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:47.030 Verification LBA range: start 0x0 length 0x4ff80 00:07:47.030 Nvme1n1p1 : 5.08 1435.06 5.61 0.00 0.00 88716.68 22887.19 87919.06 00:07:47.030 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:47.030 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:47.030 Nvme1n1p1 : 5.05 1418.75 5.54 0.00 0.00 89664.15 20769.87 94371.84 00:07:47.030 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:47.030 Verification LBA range: start 0x0 length 0x4ff7f 00:07:47.030 Nvme1n1p2 : 5.09 1434.20 5.60 0.00 0.00 88510.15 21979.77 76626.71 00:07:47.030 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:47.030 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:47.030 Nvme1n1p2 : 5.05 1418.31 5.54 0.00 0.00 89516.75 20265.75 84692.68 00:07:47.030 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:47.030 Verification LBA range: start 0x0 length 0x80000 00:07:47.030 Nvme2n1 : 5.09 1433.82 5.60 0.00 0.00 88337.36 22383.06 69770.63 00:07:47.030 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:47.030 Verification LBA range: start 0x80000 length 0x80000 00:07:47.030 Nvme2n1 : 5.08 1423.26 5.56 0.00 0.00 88957.10 7461.02 78643.20 00:07:47.030 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:47.030 Verification LBA range: start 0x0 length 0x80000 00:07:47.030 Nvme2n2 : 5.09 1433.46 5.60 0.00 0.00 88128.96 22383.06 72593.72 00:07:47.030 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:47.030 Verification LBA range: start 0x80000 length 0x80000 00:07:47.030 Nvme2n2 : 5.09 1432.53 5.60 0.00 0.00 88382.08 8771.74 71383.83 00:07:47.030 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:47.030 Verification LBA range: start 0x0 length 0x80000 00:07:47.030 Nvme2n3 : 5.09 1433.05 5.60 0.00 0.00 87933.48 17644.31 74206.92 00:07:47.030 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:47.030 Verification LBA range: start 0x80000 length 0x80000 00:07:47.030 Nvme2n3 : 5.09 1432.17 5.59 0.00 0.00 88160.16 9124.63 75013.51 00:07:47.030 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:47.030 Verification LBA range: start 0x0 length 0x20000 00:07:47.030 Nvme3n1 : 5.10 1443.81 5.64 0.00 0.00 87184.27 1852.65 74206.92 00:07:47.030 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:47.030 Verification LBA range: start 0x20000 length 0x20000 00:07:47.030 Nvme3n1 
: 5.10 1431.78 5.59 0.00 0.00 87978.84 8620.50 78643.20 00:07:47.030 [2024-11-27T21:40:10.151Z] =================================================================================================================== 00:07:47.030 [2024-11-27T21:40:10.151Z] Total : 20025.02 78.22 0.00 0.00 88582.06 1852.65 105664.20 00:07:47.601 00:07:47.601 real 0m6.306s 00:07:47.601 user 0m11.958s 00:07:47.601 sys 0m0.187s 00:07:47.601 21:40:10 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.601 21:40:10 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:47.601 ************************************ 00:07:47.601 END TEST bdev_verify 00:07:47.601 ************************************ 00:07:47.601 21:40:10 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:47.601 21:40:10 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:47.601 21:40:10 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.601 21:40:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:47.601 ************************************ 00:07:47.601 START TEST bdev_verify_big_io 00:07:47.601 ************************************ 00:07:47.601 21:40:10 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:47.601 [2024-11-27 21:40:10.618205] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:07:47.601 [2024-11-27 21:40:10.618314] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73429 ] 00:07:47.862 [2024-11-27 21:40:10.763664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:47.862 [2024-11-27 21:40:10.783944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.862 [2024-11-27 21:40:10.783973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.123 Running I/O for 5 seconds... 
00:07:54.386 1906.00 IOPS, 119.12 MiB/s [2024-11-27T21:40:17.766Z] 3613.00 IOPS, 225.81 MiB/s [2024-11-27T21:40:17.766Z] 3139.00 IOPS, 196.19 MiB/s 00:07:54.645 Latency(us) 00:07:54.645 [2024-11-27T21:40:17.766Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.645 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:54.645 Verification LBA range: start 0x0 length 0xbd0b 00:07:54.645 Nvme0n1 : 5.94 95.31 5.96 0.00 0.00 1276511.24 37305.11 1432516.14 00:07:54.645 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:54.645 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:54.645 Nvme0n1 : 5.78 94.24 5.89 0.00 0.00 1276404.85 57671.68 1322818.95 00:07:54.645 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:54.645 Verification LBA range: start 0x0 length 0x4ff8 00:07:54.645 Nvme1n1p1 : 5.94 94.46 5.90 0.00 0.00 1235490.46 86709.17 1329271.73 00:07:54.645 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:54.645 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:54.645 Nvme1n1p1 : 5.79 98.99 6.19 0.00 0.00 1203408.47 98808.12 1206669.00 00:07:54.645 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:54.645 Verification LBA range: start 0x0 length 0x4ff7 00:07:54.645 Nvme1n1p2 : 5.94 98.98 6.19 0.00 0.00 1155256.67 103244.41 1355082.83 00:07:54.645 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:54.645 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:54.645 Nvme1n1p2 : 6.00 102.27 6.39 0.00 0.00 1119230.08 103244.41 1122782.92 00:07:54.645 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:54.645 Verification LBA range: start 0x0 length 0x8000 00:07:54.645 Nvme2n1 : 6.01 95.86 5.99 0.00 0.00 1154596.93 65737.65 1677721.60 00:07:54.645 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:54.645 Verification LBA range: start 0x8000 length 0x8000 00:07:54.645 Nvme2n1 : 6.01 106.57 6.66 0.00 0.00 1053934.83 100824.62 1167952.34 00:07:54.645 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:54.645 Verification LBA range: start 0x0 length 0x8000 00:07:54.645 Nvme2n2 : 6.16 101.34 6.33 0.00 0.00 1043237.14 27424.30 1729343.80 00:07:54.645 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:54.645 Verification LBA range: start 0x8000 length 0x8000 00:07:54.645 Nvme2n2 : 6.12 108.42 6.78 0.00 0.00 996256.69 104857.60 1193763.45 00:07:54.645 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:54.645 Verification LBA range: start 0x0 length 0x8000 00:07:54.646 Nvme2n3 : 6.24 120.45 7.53 0.00 0.00 856470.72 10334.52 1780966.01 00:07:54.646 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:54.646 Verification LBA range: start 0x8000 length 0x8000 00:07:54.646 Nvme2n3 : 6.21 119.38 7.46 0.00 0.00 886637.38 45371.08 1213121.77 00:07:54.646 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:54.646 Verification LBA range: start 0x0 length 0x2000 00:07:54.646 Nvme3n1 : 6.32 186.68 11.67 0.00 0.00 537725.95 261.51 1522854.99 00:07:54.646 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:54.646 Verification LBA range: start 0x2000 length 0x2000 00:07:54.646 Nvme3n1 : 6.21 127.82 7.99 0.00 0.00 802116.43 1090.17 1251838.42 00:07:54.646 
[2024-11-27T21:40:17.767Z] =================================================================================================================== 00:07:54.646 [2024-11-27T21:40:17.767Z] Total : 1550.78 96.92 0.00 0.00 997221.31 261.51 1780966.01 00:07:58.881 00:07:58.881 real 0m11.369s 00:07:58.881 user 0m21.990s 00:07:58.881 sys 0m0.248s 00:07:58.881 21:40:21 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.881 ************************************ 00:07:58.881 END TEST bdev_verify_big_io 00:07:58.881 ************************************ 00:07:58.881 21:40:21 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:58.881 21:40:21 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:58.881 21:40:21 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:58.881 21:40:21 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.881 21:40:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:58.881 ************************************ 00:07:58.881 START TEST bdev_write_zeroes 00:07:58.881 ************************************ 00:07:58.881 21:40:21 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:59.140 [2024-11-27 21:40:22.055936] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:07:59.140 [2024-11-27 21:40:22.056046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73540 ] 00:07:59.140 [2024-11-27 21:40:22.198556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.140 [2024-11-27 21:40:22.219251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.710 Running I/O for 1 seconds... 
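The three bdevperf runs in this section differ only in their workload parameters. Side by side, with the flags copied from the log and the binary path and --json argument abbreviated:

  bdevperf --json bdev.json -q 128 -o 4096  -w verify       -t 5 -C -m 0x3   # bdev_verify: 4 KiB verified I/O, 2 cores
  bdevperf --json bdev.json -q 128 -o 65536 -w verify       -t 5 -C -m 0x3   # bdev_verify_big_io: 64 KiB verified I/O, 2 cores
  bdevperf --json bdev.json -q 128 -o 4096  -w write_zeroes -t 1             # bdev_write_zeroes: 4 KiB write-zeroes, 1 core, 1 second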
00:08:00.647 64896.00 IOPS, 253.50 MiB/s 00:08:00.647 Latency(us) 00:08:00.647 [2024-11-27T21:40:23.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:00.647 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:00.647 Nvme0n1 : 1.03 9220.46 36.02 0.00 0.00 13853.18 6856.07 25508.63 00:08:00.647 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:00.647 Nvme1n1p1 : 1.03 9209.30 35.97 0.00 0.00 13846.72 10788.23 25206.15 00:08:00.648 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:00.648 Nvme1n1p2 : 1.03 9198.18 35.93 0.00 0.00 13822.07 10788.23 23996.26 00:08:00.648 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:00.648 Nvme2n1 : 1.03 9187.86 35.89 0.00 0.00 13792.50 9023.80 23592.96 00:08:00.648 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:00.648 Nvme2n2 : 1.03 9177.56 35.85 0.00 0.00 13787.78 8670.92 22988.01 00:08:00.648 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:00.648 Nvme2n3 : 1.03 9167.36 35.81 0.00 0.00 13766.37 7309.78 23996.26 00:08:00.648 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:00.648 Nvme3n1 : 1.03 9095.22 35.53 0.00 0.00 13853.07 10082.46 25508.63 00:08:00.648 [2024-11-27T21:40:23.769Z] =================================================================================================================== 00:08:00.648 [2024-11-27T21:40:23.769Z] Total : 64255.94 251.00 0.00 0.00 13817.35 6856.07 25508.63 00:08:00.909 00:08:00.909 real 0m1.813s 00:08:00.909 user 0m1.554s 00:08:00.909 sys 0m0.148s 00:08:00.909 ************************************ 00:08:00.909 END TEST bdev_write_zeroes 00:08:00.909 ************************************ 00:08:00.909 21:40:23 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.909 21:40:23 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:00.909 21:40:23 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:00.909 21:40:23 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:00.909 21:40:23 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.909 21:40:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:00.909 ************************************ 00:08:00.909 START TEST bdev_json_nonenclosed 00:08:00.909 ************************************ 00:08:00.909 21:40:23 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:00.909 [2024-11-27 21:40:23.927153] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:08:00.909 [2024-11-27 21:40:23.927388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73577 ] 00:08:01.171 [2024-11-27 21:40:24.072470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.171 [2024-11-27 21:40:24.091400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.171 [2024-11-27 21:40:24.091611] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:01.171 [2024-11-27 21:40:24.091634] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:01.171 [2024-11-27 21:40:24.091645] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:01.171 00:08:01.171 real 0m0.282s 00:08:01.171 user 0m0.100s 00:08:01.171 sys 0m0.079s 00:08:01.171 21:40:24 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.171 ************************************ 00:08:01.171 END TEST bdev_json_nonenclosed 00:08:01.171 ************************************ 00:08:01.171 21:40:24 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:01.171 21:40:24 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:01.171 21:40:24 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:01.171 21:40:24 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.171 21:40:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:01.171 ************************************ 00:08:01.171 START TEST bdev_json_nonarray 00:08:01.171 ************************************ 00:08:01.171 21:40:24 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:01.171 [2024-11-27 21:40:24.264565] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:08:01.171 [2024-11-27 21:40:24.264674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73602 ] 00:08:01.433 [2024-11-27 21:40:24.411061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.433 [2024-11-27 21:40:24.429872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.433 [2024-11-27 21:40:24.430092] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
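bdev_json_nonenclosed and bdev_json_nonarray feed bdevperf deliberately malformed configuration files and expect the JSON-config loader to reject them with the errors shown. The log does not show the contents of nonenclosed.json or nonarray.json; the shapes below are an assumption, inferred only from the two error messages and from the usual top-level layout of an SPDK config file:

  # valid shape the loader expects (assumed):
  { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
  # nonenclosed.json, assumed contents (top level not enclosed in {}):
  "subsystems": []
  # nonarray.json, assumed contents ("subsystems" is not an array):
  { "subsystems": {} }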
00:08:01.433 [2024-11-27 21:40:24.430113] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:01.433 [2024-11-27 21:40:24.430129] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:01.433 ************************************ 00:08:01.433 END TEST bdev_json_nonarray 00:08:01.433 ************************************ 00:08:01.433 00:08:01.433 real 0m0.283s 00:08:01.433 user 0m0.110s 00:08:01.433 sys 0m0.070s 00:08:01.433 21:40:24 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.433 21:40:24 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:01.433 21:40:24 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:08:01.433 21:40:24 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:08:01.433 21:40:24 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:01.433 21:40:24 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.433 21:40:24 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.433 21:40:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:01.694 ************************************ 00:08:01.694 START TEST bdev_gpt_uuid 00:08:01.694 ************************************ 00:08:01.694 21:40:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:08:01.694 21:40:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:08:01.694 21:40:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:08:01.694 21:40:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73622 00:08:01.694 21:40:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:01.694 21:40:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 73622 00:08:01.694 21:40:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 73622 ']' 00:08:01.694 21:40:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.694 21:40:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.694 21:40:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.694 21:40:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.694 21:40:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:01.694 21:40:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:01.694 [2024-11-27 21:40:24.622080] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:08:01.695 [2024-11-27 21:40:24.622196] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73622 ] 00:08:01.695 [2024-11-27 21:40:24.762717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.695 [2024-11-27 21:40:24.782153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.725 21:40:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.725 21:40:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:08:02.726 21:40:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:02.726 21:40:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.726 21:40:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:02.726 Some configs were skipped because the RPC state that can call them passed over. 00:08:02.726 21:40:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.726 21:40:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:08:02.726 21:40:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.726 21:40:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:02.726 21:40:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.726 21:40:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:02.726 21:40:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.726 21:40:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:02.726 21:40:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.726 21:40:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:08:02.726 { 00:08:02.726 "name": "Nvme1n1p1", 00:08:02.726 "aliases": [ 00:08:02.726 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:02.726 ], 00:08:02.726 "product_name": "GPT Disk", 00:08:02.726 "block_size": 4096, 00:08:02.726 "num_blocks": 655104, 00:08:02.726 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:02.726 "assigned_rate_limits": { 00:08:02.726 "rw_ios_per_sec": 0, 00:08:02.726 "rw_mbytes_per_sec": 0, 00:08:02.726 "r_mbytes_per_sec": 0, 00:08:02.726 "w_mbytes_per_sec": 0 00:08:02.726 }, 00:08:02.726 "claimed": false, 00:08:02.726 "zoned": false, 00:08:02.726 "supported_io_types": { 00:08:02.726 "read": true, 00:08:02.726 "write": true, 00:08:02.726 "unmap": true, 00:08:02.726 "flush": true, 00:08:02.726 "reset": true, 00:08:02.726 "nvme_admin": false, 00:08:02.726 "nvme_io": false, 00:08:02.726 "nvme_io_md": false, 00:08:02.726 "write_zeroes": true, 00:08:02.726 "zcopy": false, 00:08:02.726 "get_zone_info": false, 00:08:02.726 "zone_management": false, 00:08:02.726 "zone_append": false, 00:08:02.726 "compare": true, 00:08:02.726 "compare_and_write": false, 00:08:02.726 "abort": true, 00:08:02.726 "seek_hole": false, 00:08:02.726 "seek_data": false, 00:08:02.726 "copy": true, 00:08:02.726 "nvme_iov_md": false 00:08:02.726 }, 00:08:02.726 "driver_specific": { 
00:08:02.726 "gpt": { 00:08:02.726 "base_bdev": "Nvme1n1", 00:08:02.726 "offset_blocks": 256, 00:08:02.726 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:02.726 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:02.726 "partition_name": "SPDK_TEST_first" 00:08:02.726 } 00:08:02.726 } 00:08:02.726 } 00:08:02.726 ]' 00:08:02.726 21:40:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:08:02.726 21:40:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:08:02.987 21:40:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:08:02.987 21:40:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:02.987 21:40:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:02.987 21:40:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:02.987 21:40:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:02.987 21:40:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.987 21:40:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:02.987 21:40:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.987 21:40:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:08:02.987 { 00:08:02.987 "name": "Nvme1n1p2", 00:08:02.987 "aliases": [ 00:08:02.987 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:02.987 ], 00:08:02.987 "product_name": "GPT Disk", 00:08:02.987 "block_size": 4096, 00:08:02.987 "num_blocks": 655103, 00:08:02.987 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:02.987 "assigned_rate_limits": { 00:08:02.987 "rw_ios_per_sec": 0, 00:08:02.987 "rw_mbytes_per_sec": 0, 00:08:02.987 "r_mbytes_per_sec": 0, 00:08:02.987 "w_mbytes_per_sec": 0 00:08:02.987 }, 00:08:02.987 "claimed": false, 00:08:02.987 "zoned": false, 00:08:02.987 "supported_io_types": { 00:08:02.987 "read": true, 00:08:02.987 "write": true, 00:08:02.987 "unmap": true, 00:08:02.987 "flush": true, 00:08:02.987 "reset": true, 00:08:02.987 "nvme_admin": false, 00:08:02.987 "nvme_io": false, 00:08:02.987 "nvme_io_md": false, 00:08:02.987 "write_zeroes": true, 00:08:02.987 "zcopy": false, 00:08:02.987 "get_zone_info": false, 00:08:02.987 "zone_management": false, 00:08:02.987 "zone_append": false, 00:08:02.987 "compare": true, 00:08:02.987 "compare_and_write": false, 00:08:02.987 "abort": true, 00:08:02.987 "seek_hole": false, 00:08:02.987 "seek_data": false, 00:08:02.987 "copy": true, 00:08:02.987 "nvme_iov_md": false 00:08:02.987 }, 00:08:02.987 "driver_specific": { 00:08:02.987 "gpt": { 00:08:02.987 "base_bdev": "Nvme1n1", 00:08:02.987 "offset_blocks": 655360, 00:08:02.987 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:02.987 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:02.987 "partition_name": "SPDK_TEST_second" 00:08:02.987 } 00:08:02.987 } 00:08:02.987 } 00:08:02.987 ]' 00:08:02.987 21:40:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:08:02.987 21:40:25 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:08:02.987 21:40:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:08:02.987 21:40:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:02.988 21:40:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:02.988 21:40:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:02.988 21:40:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 73622 00:08:02.988 21:40:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 73622 ']' 00:08:02.988 21:40:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 73622 00:08:02.988 21:40:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:08:02.988 21:40:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.988 21:40:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73622 00:08:02.988 21:40:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:02.988 killing process with pid 73622 00:08:02.988 21:40:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:02.988 21:40:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73622' 00:08:02.988 21:40:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 73622 00:08:02.988 21:40:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 73622 00:08:03.249 00:08:03.249 real 0m1.744s 00:08:03.249 user 0m1.936s 00:08:03.249 sys 0m0.315s 00:08:03.249 21:40:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.249 ************************************ 00:08:03.249 END TEST bdev_gpt_uuid 00:08:03.249 ************************************ 00:08:03.249 21:40:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:03.249 21:40:26 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:08:03.249 21:40:26 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:08:03.249 21:40:26 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:08:03.249 21:40:26 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:03.249 21:40:26 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:03.249 21:40:26 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:03.249 21:40:26 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:03.249 21:40:26 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:03.249 21:40:26 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:03.822 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:03.822 Waiting for block devices as requested 00:08:03.822 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:03.822 0000:00:10.0 (1b36 0010): 
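(Editor's note) The cleanup phase that follows hands the controllers back to the kernel nvme driver via setup.sh reset and then strips the GPT metadata the test wrote. A hedged sketch of the wipe step seen below; the device name and the use of sudo are assumptions, and wipefs --all is destructive, so this is illustrative only:

    # Erases the primary/backup GPT headers and the protective MBR, then wipefs
    # itself asks the kernel to re-read the (now empty) partition table.
    dev=/dev/nvme0n1          # assumed device name, as in the log below
    sudo wipefs --all "$dev"
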
uio_pci_generic -> nvme 00:08:04.083 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:04.083 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:09.372 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:09.372 21:40:32 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:09.372 21:40:32 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:09.372 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:09.372 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:09.372 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:09.372 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:09.372 21:40:32 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:09.372 00:08:09.372 real 0m52.307s 00:08:09.372 user 1m11.818s 00:08:09.372 sys 0m7.115s 00:08:09.372 21:40:32 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.372 ************************************ 00:08:09.372 END TEST blockdev_nvme_gpt 00:08:09.372 ************************************ 00:08:09.372 21:40:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:09.633 21:40:32 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:09.633 21:40:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:09.633 21:40:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.633 21:40:32 -- common/autotest_common.sh@10 -- # set +x 00:08:09.633 ************************************ 00:08:09.633 START TEST nvme 00:08:09.633 ************************************ 00:08:09.633 21:40:32 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:09.633 * Looking for test storage... 00:08:09.633 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:09.633 21:40:32 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:09.633 21:40:32 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:08:09.633 21:40:32 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:09.633 21:40:32 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:09.633 21:40:32 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.633 21:40:32 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.633 21:40:32 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.633 21:40:32 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.633 21:40:32 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.633 21:40:32 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.633 21:40:32 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.633 21:40:32 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:09.633 21:40:32 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:09.633 21:40:32 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:09.633 21:40:32 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.633 21:40:32 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:09.633 21:40:32 nvme -- scripts/common.sh@345 -- # : 1 00:08:09.633 21:40:32 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.633 21:40:32 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:09.633 21:40:32 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:09.633 21:40:32 nvme -- scripts/common.sh@353 -- # local d=1 00:08:09.633 21:40:32 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.633 21:40:32 nvme -- scripts/common.sh@355 -- # echo 1 00:08:09.633 21:40:32 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.633 21:40:32 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:09.633 21:40:32 nvme -- scripts/common.sh@353 -- # local d=2 00:08:09.633 21:40:32 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.633 21:40:32 nvme -- scripts/common.sh@355 -- # echo 2 00:08:09.633 21:40:32 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.633 21:40:32 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.633 21:40:32 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.633 21:40:32 nvme -- scripts/common.sh@368 -- # return 0 00:08:09.633 21:40:32 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.633 21:40:32 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:09.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.633 --rc genhtml_branch_coverage=1 00:08:09.633 --rc genhtml_function_coverage=1 00:08:09.633 --rc genhtml_legend=1 00:08:09.633 --rc geninfo_all_blocks=1 00:08:09.633 --rc geninfo_unexecuted_blocks=1 00:08:09.633 00:08:09.633 ' 00:08:09.633 21:40:32 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:09.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.633 --rc genhtml_branch_coverage=1 00:08:09.633 --rc genhtml_function_coverage=1 00:08:09.633 --rc genhtml_legend=1 00:08:09.633 --rc geninfo_all_blocks=1 00:08:09.633 --rc geninfo_unexecuted_blocks=1 00:08:09.633 00:08:09.633 ' 00:08:09.633 21:40:32 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:09.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.633 --rc genhtml_branch_coverage=1 00:08:09.633 --rc genhtml_function_coverage=1 00:08:09.633 --rc genhtml_legend=1 00:08:09.633 --rc geninfo_all_blocks=1 00:08:09.633 --rc geninfo_unexecuted_blocks=1 00:08:09.633 00:08:09.633 ' 00:08:09.633 21:40:32 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:09.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.633 --rc genhtml_branch_coverage=1 00:08:09.633 --rc genhtml_function_coverage=1 00:08:09.633 --rc genhtml_legend=1 00:08:09.633 --rc geninfo_all_blocks=1 00:08:09.633 --rc geninfo_unexecuted_blocks=1 00:08:09.633 00:08:09.633 ' 00:08:09.634 21:40:32 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:10.205 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:10.776 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:10.776 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:10.776 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:10.776 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:10.776 21:40:33 nvme -- nvme/nvme.sh@79 -- # uname 00:08:10.776 21:40:33 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:10.776 21:40:33 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:10.776 21:40:33 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:10.776 21:40:33 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:10.776 21:40:33 nvme -- 
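(Editor's note) The lcov gate traced above uses the cmp_versions helper from scripts/common.sh: both version strings are split on '.', '-' and ':' and compared element by element. A rough stand-alone equivalent is sketched here; the real helper also handles '>' and '=' operators and more padding cases, so treat this as a simplification:

    # Simplified version comparison in the spirit of scripts/common.sh cmp_versions.
    version_lt() {                     # returns 0 if $1 < $2, 1 otherwise
        local IFS=.- a b i
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1                       # equal
    }
    version_lt 1.15 2 && echo "lcov 1.15 is older than 2"
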
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:08:10.776 21:40:33 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:08:10.776 Waiting for stub to ready for secondary processes... 00:08:10.776 21:40:33 nvme -- common/autotest_common.sh@1075 -- # stubpid=74245 00:08:10.776 21:40:33 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:08:10.776 21:40:33 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:10.776 21:40:33 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:10.776 21:40:33 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/74245 ]] 00:08:10.776 21:40:33 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:10.776 [2024-11-27 21:40:33.798467] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:08:10.776 [2024-11-27 21:40:33.798577] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:11.722 [2024-11-27 21:40:34.522041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:11.722 [2024-11-27 21:40:34.534405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:11.722 [2024-11-27 21:40:34.534524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:11.722 [2024-11-27 21:40:34.534586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.722 [2024-11-27 21:40:34.545936] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:11.722 [2024-11-27 21:40:34.545968] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:11.722 [2024-11-27 21:40:34.554885] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:11.722 [2024-11-27 21:40:34.555075] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:11.722 [2024-11-27 21:40:34.556434] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:11.722 [2024-11-27 21:40:34.556633] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:11.722 [2024-11-27 21:40:34.556701] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:11.722 [2024-11-27 21:40:34.557980] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:11.722 [2024-11-27 21:40:34.558262] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:11.722 [2024-11-27 21:40:34.558319] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:11.722 [2024-11-27 21:40:34.559231] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:11.722 [2024-11-27 21:40:34.559425] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:11.722 [2024-11-27 21:40:34.559474] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:11.722 [2024-11-27 21:40:34.559506] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:11.722 [2024-11-27 21:40:34.559534] nvme_cuse.c: 
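(Editor's note) The "Waiting for stub to ready for secondary processes..." loop above polls for the /var/run/spdk_stub0 sentinel while the stub process is still alive. A minimal hedged sketch of that wait pattern; the 60-second cap and the error handling are assumptions, not what autotest_common.sh literally does:

    # Wait for the SPDK stub (primary process) to publish its sentinel file.
    stubpid=74245                        # pid reported in the log above
    sentinel=/var/run/spdk_stub0
    for _ in $(seq 1 60); do             # assumed upper bound on the wait
        [[ -e "$sentinel" ]] && { echo done.; break; }
        [[ -e /proc/$stubpid ]] || { echo "stub died" >&2; exit 1; }
        sleep 1s
    done
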
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:11.722 21:40:34 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:11.722 done. 00:08:11.722 21:40:34 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:08:11.722 21:40:34 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:11.722 21:40:34 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:08:11.722 21:40:34 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.722 21:40:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:11.722 ************************************ 00:08:11.722 START TEST nvme_reset 00:08:11.722 ************************************ 00:08:11.722 21:40:34 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:11.983 Initializing NVMe Controllers 00:08:11.983 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:11.983 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:11.983 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:11.983 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:11.983 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:11.983 00:08:11.983 real 0m0.183s 00:08:11.983 user 0m0.053s 00:08:11.983 sys 0m0.082s 00:08:11.983 21:40:34 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.983 ************************************ 00:08:11.983 END TEST nvme_reset 00:08:11.983 ************************************ 00:08:11.983 21:40:34 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:11.983 21:40:35 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:11.983 21:40:35 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:11.983 21:40:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.983 21:40:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:11.983 ************************************ 00:08:11.983 START TEST nvme_identify 00:08:11.983 ************************************ 00:08:11.983 21:40:35 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:08:11.983 21:40:35 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:11.983 21:40:35 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:11.983 21:40:35 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:11.983 21:40:35 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:11.983 21:40:35 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:11.983 21:40:35 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:08:11.983 21:40:35 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:11.983 21:40:35 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:11.983 21:40:35 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:12.247 21:40:35 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:12.247 21:40:35 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:12.247 21:40:35 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:12.247 [2024-11-27 
21:40:35.250443] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 74266 terminated unexpected 00:08:12.247 ===================================================== 00:08:12.247 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:12.247 ===================================================== 00:08:12.247 Controller Capabilities/Features 00:08:12.247 ================================ 00:08:12.247 Vendor ID: 1b36 00:08:12.247 Subsystem Vendor ID: 1af4 00:08:12.247 Serial Number: 12340 00:08:12.247 Model Number: QEMU NVMe Ctrl 00:08:12.247 Firmware Version: 8.0.0 00:08:12.247 Recommended Arb Burst: 6 00:08:12.247 IEEE OUI Identifier: 00 54 52 00:08:12.247 Multi-path I/O 00:08:12.247 May have multiple subsystem ports: No 00:08:12.247 May have multiple controllers: No 00:08:12.247 Associated with SR-IOV VF: No 00:08:12.247 Max Data Transfer Size: 524288 00:08:12.247 Max Number of Namespaces: 256 00:08:12.247 Max Number of I/O Queues: 64 00:08:12.247 NVMe Specification Version (VS): 1.4 00:08:12.247 NVMe Specification Version (Identify): 1.4 00:08:12.247 Maximum Queue Entries: 2048 00:08:12.247 Contiguous Queues Required: Yes 00:08:12.247 Arbitration Mechanisms Supported 00:08:12.247 Weighted Round Robin: Not Supported 00:08:12.247 Vendor Specific: Not Supported 00:08:12.247 Reset Timeout: 7500 ms 00:08:12.247 Doorbell Stride: 4 bytes 00:08:12.247 NVM Subsystem Reset: Not Supported 00:08:12.247 Command Sets Supported 00:08:12.247 NVM Command Set: Supported 00:08:12.247 Boot Partition: Not Supported 00:08:12.247 Memory Page Size Minimum: 4096 bytes 00:08:12.247 Memory Page Size Maximum: 65536 bytes 00:08:12.247 Persistent Memory Region: Not Supported 00:08:12.247 Optional Asynchronous Events Supported 00:08:12.247 Namespace Attribute Notices: Supported 00:08:12.247 Firmware Activation Notices: Not Supported 00:08:12.247 ANA Change Notices: Not Supported 00:08:12.247 PLE Aggregate Log Change Notices: Not Supported 00:08:12.247 LBA Status Info Alert Notices: Not Supported 00:08:12.247 EGE Aggregate Log Change Notices: Not Supported 00:08:12.247 Normal NVM Subsystem Shutdown event: Not Supported 00:08:12.247 Zone Descriptor Change Notices: Not Supported 00:08:12.247 Discovery Log Change Notices: Not Supported 00:08:12.247 Controller Attributes 00:08:12.247 128-bit Host Identifier: Not Supported 00:08:12.247 Non-Operational Permissive Mode: Not Supported 00:08:12.247 NVM Sets: Not Supported 00:08:12.247 Read Recovery Levels: Not Supported 00:08:12.247 Endurance Groups: Not Supported 00:08:12.247 Predictable Latency Mode: Not Supported 00:08:12.247 Traffic Based Keep ALive: Not Supported 00:08:12.247 Namespace Granularity: Not Supported 00:08:12.247 SQ Associations: Not Supported 00:08:12.247 UUID List: Not Supported 00:08:12.247 Multi-Domain Subsystem: Not Supported 00:08:12.247 Fixed Capacity Management: Not Supported 00:08:12.247 Variable Capacity Management: Not Supported 00:08:12.247 Delete Endurance Group: Not Supported 00:08:12.247 Delete NVM Set: Not Supported 00:08:12.247 Extended LBA Formats Supported: Supported 00:08:12.247 Flexible Data Placement Supported: Not Supported 00:08:12.248 00:08:12.248 Controller Memory Buffer Support 00:08:12.248 ================================ 00:08:12.248 Supported: No 00:08:12.248 00:08:12.248 Persistent Memory Region Support 00:08:12.248 ================================ 00:08:12.248 Supported: No 00:08:12.248 00:08:12.248 Admin Command Set Attributes 00:08:12.248 ============================ 00:08:12.248 Security Send/Receive: 
Not Supported 00:08:12.248 Format NVM: Supported 00:08:12.248 Firmware Activate/Download: Not Supported 00:08:12.248 Namespace Management: Supported 00:08:12.248 Device Self-Test: Not Supported 00:08:12.248 Directives: Supported 00:08:12.248 NVMe-MI: Not Supported 00:08:12.248 Virtualization Management: Not Supported 00:08:12.248 Doorbell Buffer Config: Supported 00:08:12.248 Get LBA Status Capability: Not Supported 00:08:12.248 Command & Feature Lockdown Capability: Not Supported 00:08:12.248 Abort Command Limit: 4 00:08:12.248 Async Event Request Limit: 4 00:08:12.248 Number of Firmware Slots: N/A 00:08:12.248 Firmware Slot 1 Read-Only: N/A 00:08:12.248 Firmware Activation Without Reset: N/A 00:08:12.248 Multiple Update Detection Support: N/A 00:08:12.248 Firmware Update Granularity: No Information Provided 00:08:12.248 Per-Namespace SMART Log: Yes 00:08:12.248 Asymmetric Namespace Access Log Page: Not Supported 00:08:12.248 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:12.248 Command Effects Log Page: Supported 00:08:12.248 Get Log Page Extended Data: Supported 00:08:12.248 Telemetry Log Pages: Not Supported 00:08:12.248 Persistent Event Log Pages: Not Supported 00:08:12.248 Supported Log Pages Log Page: May Support 00:08:12.248 Commands Supported & Effects Log Page: Not Supported 00:08:12.248 Feature Identifiers & Effects Log Page:May Support 00:08:12.248 NVMe-MI Commands & Effects Log Page: May Support 00:08:12.248 Data Area 4 for Telemetry Log: Not Supported 00:08:12.248 Error Log Page Entries Supported: 1 00:08:12.248 Keep Alive: Not Supported 00:08:12.248 00:08:12.248 NVM Command Set Attributes 00:08:12.248 ========================== 00:08:12.248 Submission Queue Entry Size 00:08:12.248 Max: 64 00:08:12.248 Min: 64 00:08:12.248 Completion Queue Entry Size 00:08:12.248 Max: 16 00:08:12.248 Min: 16 00:08:12.248 Number of Namespaces: 256 00:08:12.248 Compare Command: Supported 00:08:12.248 Write Uncorrectable Command: Not Supported 00:08:12.248 Dataset Management Command: Supported 00:08:12.248 Write Zeroes Command: Supported 00:08:12.248 Set Features Save Field: Supported 00:08:12.248 Reservations: Not Supported 00:08:12.248 Timestamp: Supported 00:08:12.248 Copy: Supported 00:08:12.248 Volatile Write Cache: Present 00:08:12.248 Atomic Write Unit (Normal): 1 00:08:12.248 Atomic Write Unit (PFail): 1 00:08:12.248 Atomic Compare & Write Unit: 1 00:08:12.248 Fused Compare & Write: Not Supported 00:08:12.248 Scatter-Gather List 00:08:12.248 SGL Command Set: Supported 00:08:12.248 SGL Keyed: Not Supported 00:08:12.248 SGL Bit Bucket Descriptor: Not Supported 00:08:12.248 SGL Metadata Pointer: Not Supported 00:08:12.248 Oversized SGL: Not Supported 00:08:12.248 SGL Metadata Address: Not Supported 00:08:12.248 SGL Offset: Not Supported 00:08:12.248 Transport SGL Data Block: Not Supported 00:08:12.248 Replay Protected Memory Block: Not Supported 00:08:12.248 00:08:12.248 Firmware Slot Information 00:08:12.248 ========================= 00:08:12.248 Active slot: 1 00:08:12.248 Slot 1 Firmware Revision: 1.0 00:08:12.248 00:08:12.248 00:08:12.248 Commands Supported and Effects 00:08:12.248 ============================== 00:08:12.248 Admin Commands 00:08:12.248 -------------- 00:08:12.248 Delete I/O Submission Queue (00h): Supported 00:08:12.248 Create I/O Submission Queue (01h): Supported 00:08:12.248 Get Log Page (02h): Supported 00:08:12.248 Delete I/O Completion Queue (04h): Supported 00:08:12.248 Create I/O Completion Queue (05h): Supported 00:08:12.248 Identify (06h): Supported 
00:08:12.248 Abort (08h): Supported 00:08:12.248 Set Features (09h): Supported 00:08:12.248 Get Features (0Ah): Supported 00:08:12.248 Asynchronous Event Request (0Ch): Supported 00:08:12.248 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:12.248 Directive Send (19h): Supported 00:08:12.248 Directive Receive (1Ah): Supported 00:08:12.248 Virtualization Management (1Ch): Supported 00:08:12.248 Doorbell Buffer Config (7Ch): Supported 00:08:12.248 Format NVM (80h): Supported LBA-Change 00:08:12.248 I/O Commands 00:08:12.248 ------------ 00:08:12.248 Flush (00h): Supported LBA-Change 00:08:12.248 Write (01h): Supported LBA-Change 00:08:12.248 Read (02h): Supported 00:08:12.248 Compare (05h): Supported 00:08:12.248 Write Zeroes (08h): Supported LBA-Change 00:08:12.248 Dataset Management (09h): Supported LBA-Change 00:08:12.248 Unknown (0Ch): Supported 00:08:12.248 Unknown (12h): Supported 00:08:12.248 Copy (19h): Supported LBA-Change 00:08:12.248 Unknown (1Dh): Supported LBA-Change 00:08:12.248 00:08:12.248 Error Log 00:08:12.248 ========= 00:08:12.248 00:08:12.248 Arbitration 00:08:12.248 =========== 00:08:12.248 Arbitration Burst: no limit 00:08:12.248 00:08:12.248 Power Management 00:08:12.248 ================ 00:08:12.248 Number of Power States: 1 00:08:12.248 Current Power State: Power State #0 00:08:12.248 Power State #0: 00:08:12.248 Max Power: 25.00 W 00:08:12.248 Non-Operational State: Operational 00:08:12.248 Entry Latency: 16 microseconds 00:08:12.248 Exit Latency: 4 microseconds 00:08:12.248 Relative Read Throughput: 0 00:08:12.248 Relative Read Latency: 0 00:08:12.248 Relative Write Throughput: 0 00:08:12.248 Relative Write Latency: 0 00:08:12.248 Idle Power[2024-11-27 21:40:35.252382] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 74266 terminated unexpected 00:08:12.248 : Not Reported 00:08:12.248 Active Power: Not Reported 00:08:12.248 Non-Operational Permissive Mode: Not Supported 00:08:12.248 00:08:12.248 Health Information 00:08:12.248 ================== 00:08:12.248 Critical Warnings: 00:08:12.248 Available Spare Space: OK 00:08:12.248 Temperature: OK 00:08:12.248 Device Reliability: OK 00:08:12.248 Read Only: No 00:08:12.248 Volatile Memory Backup: OK 00:08:12.248 Current Temperature: 323 Kelvin (50 Celsius) 00:08:12.248 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:12.248 Available Spare: 0% 00:08:12.248 Available Spare Threshold: 0% 00:08:12.248 Life Percentage Used: 0% 00:08:12.248 Data Units Read: 642 00:08:12.248 Data Units Written: 570 00:08:12.248 Host Read Commands: 34791 00:08:12.248 Host Write Commands: 34577 00:08:12.248 Controller Busy Time: 0 minutes 00:08:12.248 Power Cycles: 0 00:08:12.248 Power On Hours: 0 hours 00:08:12.248 Unsafe Shutdowns: 0 00:08:12.248 Unrecoverable Media Errors: 0 00:08:12.248 Lifetime Error Log Entries: 0 00:08:12.248 Warning Temperature Time: 0 minutes 00:08:12.248 Critical Temperature Time: 0 minutes 00:08:12.248 00:08:12.248 Number of Queues 00:08:12.248 ================ 00:08:12.248 Number of I/O Submission Queues: 64 00:08:12.248 Number of I/O Completion Queues: 64 00:08:12.248 00:08:12.248 ZNS Specific Controller Data 00:08:12.248 ============================ 00:08:12.248 Zone Append Size Limit: 0 00:08:12.248 00:08:12.248 00:08:12.248 Active Namespaces 00:08:12.248 ================= 00:08:12.248 Namespace ID:1 00:08:12.248 Error Recovery Timeout: Unlimited 00:08:12.248 Command Set Identifier: NVM (00h) 00:08:12.248 Deallocate: Supported 00:08:12.248 
Deallocated/Unwritten Error: Supported 00:08:12.248 Deallocated Read Value: All 0x00 00:08:12.248 Deallocate in Write Zeroes: Not Supported 00:08:12.248 Deallocated Guard Field: 0xFFFF 00:08:12.248 Flush: Supported 00:08:12.248 Reservation: Not Supported 00:08:12.248 Metadata Transferred as: Separate Metadata Buffer 00:08:12.248 Namespace Sharing Capabilities: Private 00:08:12.248 Size (in LBAs): 1548666 (5GiB) 00:08:12.248 Capacity (in LBAs): 1548666 (5GiB) 00:08:12.248 Utilization (in LBAs): 1548666 (5GiB) 00:08:12.248 Thin Provisioning: Not Supported 00:08:12.248 Per-NS Atomic Units: No 00:08:12.248 Maximum Single Source Range Length: 128 00:08:12.248 Maximum Copy Length: 128 00:08:12.248 Maximum Source Range Count: 128 00:08:12.248 NGUID/EUI64 Never Reused: No 00:08:12.248 Namespace Write Protected: No 00:08:12.248 Number of LBA Formats: 8 00:08:12.248 Current LBA Format: LBA Format #07 00:08:12.249 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:12.249 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:12.249 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:12.249 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:12.249 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:12.249 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:12.249 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:12.249 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:12.249 00:08:12.249 NVM Specific Namespace Data 00:08:12.249 =========================== 00:08:12.249 Logical Block Storage Tag Mask: 0 00:08:12.249 Protection Information Capabilities: 00:08:12.249 16b Guard Protection Information Storage Tag Support: No 00:08:12.249 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:12.249 Storage Tag Check Read Support: No 00:08:12.249 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.249 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.249 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.249 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.249 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.249 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.249 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.249 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.249 ===================================================== 00:08:12.249 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:12.249 ===================================================== 00:08:12.249 Controller Capabilities/Features 00:08:12.249 ================================ 00:08:12.249 Vendor ID: 1b36 00:08:12.249 Subsystem Vendor ID: 1af4 00:08:12.249 Serial Number: 12343 00:08:12.249 Model Number: QEMU NVMe Ctrl 00:08:12.249 Firmware Version: 8.0.0 00:08:12.249 Recommended Arb Burst: 6 00:08:12.249 IEEE OUI Identifier: 00 54 52 00:08:12.249 Multi-path I/O 00:08:12.249 May have multiple subsystem ports: No 00:08:12.249 May have multiple controllers: Yes 00:08:12.249 Associated with SR-IOV VF: No 00:08:12.249 Max Data Transfer Size: 524288 00:08:12.249 Max Number of Namespaces: 256 00:08:12.249 Max Number of I/O Queues: 64 00:08:12.249 NVMe Specification Version (VS): 1.4 00:08:12.249 NVMe 
Specification Version (Identify): 1.4 00:08:12.249 Maximum Queue Entries: 2048 00:08:12.249 Contiguous Queues Required: Yes 00:08:12.249 Arbitration Mechanisms Supported 00:08:12.249 Weighted Round Robin: Not Supported 00:08:12.249 Vendor Specific: Not Supported 00:08:12.249 Reset Timeout: 7500 ms 00:08:12.249 Doorbell Stride: 4 bytes 00:08:12.249 NVM Subsystem Reset: Not Supported 00:08:12.249 Command Sets Supported 00:08:12.249 NVM Command Set: Supported 00:08:12.249 Boot Partition: Not Supported 00:08:12.249 Memory Page Size Minimum: 4096 bytes 00:08:12.249 Memory Page Size Maximum: 65536 bytes 00:08:12.249 Persistent Memory Region: Not Supported 00:08:12.249 Optional Asynchronous Events Supported 00:08:12.249 Namespace Attribute Notices: Supported 00:08:12.249 Firmware Activation Notices: Not Supported 00:08:12.249 ANA Change Notices: Not Supported 00:08:12.249 PLE Aggregate Log Change Notices: Not Supported 00:08:12.249 LBA Status Info Alert Notices: Not Supported 00:08:12.249 EGE Aggregate Log Change Notices: Not Supported 00:08:12.249 Normal NVM Subsystem Shutdown event: Not Supported 00:08:12.249 Zone Descriptor Change Notices: Not Supported 00:08:12.249 Discovery Log Change Notices: Not Supported 00:08:12.249 Controller Attributes 00:08:12.249 128-bit Host Identifier: Not Supported 00:08:12.249 Non-Operational Permissive Mode: Not Supported 00:08:12.249 NVM Sets: Not Supported 00:08:12.249 Read Recovery Levels: Not Supported 00:08:12.249 Endurance Groups: Supported 00:08:12.249 Predictable Latency Mode: Not Supported 00:08:12.249 Traffic Based Keep ALive: Not Supported 00:08:12.249 Namespace Granularity: Not Supported 00:08:12.249 SQ Associations: Not Supported 00:08:12.249 UUID List: Not Supported 00:08:12.249 Multi-Domain Subsystem: Not Supported 00:08:12.249 Fixed Capacity Management: Not Supported 00:08:12.249 Variable Capacity Management: Not Supported 00:08:12.249 Delete Endurance Group: Not Supported 00:08:12.249 Delete NVM Set: Not Supported 00:08:12.249 Extended LBA Formats Supported: Supported 00:08:12.249 Flexible Data Placement Supported: Supported 00:08:12.249 00:08:12.249 Controller Memory Buffer Support 00:08:12.249 ================================ 00:08:12.249 Supported: No 00:08:12.249 00:08:12.249 Persistent Memory Region Support 00:08:12.249 ================================ 00:08:12.249 Supported: No 00:08:12.249 00:08:12.249 Admin Command Set Attributes 00:08:12.249 ============================ 00:08:12.249 Security Send/Receive: Not Supported 00:08:12.249 Format NVM: Supported 00:08:12.249 Firmware Activate/Download: Not Supported 00:08:12.249 Namespace Management: Supported 00:08:12.249 Device Self-Test: Not Supported 00:08:12.249 Directives: Supported 00:08:12.249 NVMe-MI: Not Supported 00:08:12.249 Virtualization Management: Not Supported 00:08:12.249 Doorbell Buffer Config: Supported 00:08:12.249 Get LBA Status Capability: Not Supported 00:08:12.249 Command & Feature Lockdown Capability: Not Supported 00:08:12.249 Abort Command Limit: 4 00:08:12.249 Async Event Request Limit: 4 00:08:12.249 Number of Firmware Slots: N/A 00:08:12.249 Firmware Slot 1 Read-Only: N/A 00:08:12.249 Firmware Activation Without Reset: N/A 00:08:12.249 Multiple Update Detection Support: N/A 00:08:12.249 Firmware Update Granularity: No Information Provided 00:08:12.249 Per-Namespace SMART Log: Yes 00:08:12.249 Asymmetric Namespace Access Log Page: Not Supported 00:08:12.249 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:12.249 Command Effects Log Page: Supported 00:08:12.249 
Get Log Page Extended Data: Supported 00:08:12.249 Telemetry Log Pages: Not Supported 00:08:12.249 Persistent Event Log Pages: Not Supported 00:08:12.249 Supported Log Pages Log Page: May Support 00:08:12.249 Commands Supported & Effects Log Page: Not Supported 00:08:12.249 Feature Identifiers & Effects Log Page:May Support 00:08:12.249 NVMe-MI Commands & Effects Log Page: May Support 00:08:12.249 Data Area 4 for Telemetry Log: Not Supported 00:08:12.249 Error Log Page Entries Supported: 1 00:08:12.249 Keep Alive: Not Supported 00:08:12.249 00:08:12.249 NVM Command Set Attributes 00:08:12.249 ========================== 00:08:12.249 Submission Queue Entry Size 00:08:12.249 Max: 64 00:08:12.249 Min: 64 00:08:12.249 Completion Queue Entry Size 00:08:12.249 Max: 16 00:08:12.249 Min: 16 00:08:12.249 Number of Namespaces: 256 00:08:12.249 Compare Command: Supported 00:08:12.249 Write Uncorrectable Command: Not Supported 00:08:12.249 Dataset Management Command: Supported 00:08:12.249 Write Zeroes Command: Supported 00:08:12.249 Set Features Save Field: Supported 00:08:12.249 Reservations: Not Supported 00:08:12.249 Timestamp: Supported 00:08:12.249 Copy: Supported 00:08:12.249 Volatile Write Cache: Present 00:08:12.249 Atomic Write Unit (Normal): 1 00:08:12.249 Atomic Write Unit (PFail): 1 00:08:12.249 Atomic Compare & Write Unit: 1 00:08:12.249 Fused Compare & Write: Not Supported 00:08:12.249 Scatter-Gather List 00:08:12.249 SGL Command Set: Supported 00:08:12.249 SGL Keyed: Not Supported 00:08:12.249 SGL Bit Bucket Descriptor: Not Supported 00:08:12.249 SGL Metadata Pointer: Not Supported 00:08:12.249 Oversized SGL: Not Supported 00:08:12.249 SGL Metadata Address: Not Supported 00:08:12.249 SGL Offset: Not Supported 00:08:12.249 Transport SGL Data Block: Not Supported 00:08:12.249 Replay Protected Memory Block: Not Supported 00:08:12.249 00:08:12.249 Firmware Slot Information 00:08:12.249 ========================= 00:08:12.249 Active slot: 1 00:08:12.249 Slot 1 Firmware Revision: 1.0 00:08:12.249 00:08:12.249 00:08:12.249 Commands Supported and Effects 00:08:12.249 ============================== 00:08:12.249 Admin Commands 00:08:12.249 -------------- 00:08:12.249 Delete I/O Submission Queue (00h): Supported 00:08:12.249 Create I/O Submission Queue (01h): Supported 00:08:12.249 Get Log Page (02h): Supported 00:08:12.249 Delete I/O Completion Queue (04h): Supported 00:08:12.249 Create I/O Completion Queue (05h): Supported 00:08:12.249 Identify (06h): Supported 00:08:12.249 Abort (08h): Supported 00:08:12.249 Set Features (09h): Supported 00:08:12.249 Get Features (0Ah): Supported 00:08:12.249 Asynchronous Event Request (0Ch): Supported 00:08:12.249 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:12.249 Directive Send (19h): Supported 00:08:12.249 Directive Receive (1Ah): Supported 00:08:12.249 Virtualization Management (1Ch): Supported 00:08:12.249 Doorbell Buffer Config (7Ch): Supported 00:08:12.249 Format NVM (80h): Supported LBA-Change 00:08:12.249 I/O Commands 00:08:12.249 ------------ 00:08:12.249 Flush (00h): Supported LBA-Change 00:08:12.249 Write (01h): Supported LBA-Change 00:08:12.249 Read (02h): Supported 00:08:12.249 Compare (05h): Supported 00:08:12.249 Write Zeroes (08h): Supported LBA-Change 00:08:12.249 Dataset Management (09h): Supported LBA-Change 00:08:12.249 Unknown (0Ch): Supported 00:08:12.249 Unknown (12h): Supported 00:08:12.249 Copy (19h): Supported LBA-Change 00:08:12.249 Unknown (1Dh): Supported LBA-Change 00:08:12.249 00:08:12.249 Error Log 
00:08:12.249 ========= 00:08:12.249 00:08:12.249 Arbitration 00:08:12.249 =========== 00:08:12.249 Arbitration Burst: no limit 00:08:12.249 00:08:12.249 Power Management 00:08:12.249 ================ 00:08:12.249 Number of Power States: 1 00:08:12.249 Current Power State: Power State #0 00:08:12.249 Power State #0: 00:08:12.249 Max Power: 25.00 W 00:08:12.249 Non-Operational State: Operational 00:08:12.249 Entry Latency: 16 microseconds 00:08:12.249 Exit Latency: 4 microseconds 00:08:12.249 Relative Read Throughput: 0 00:08:12.249 Relative Read Latency: 0 00:08:12.249 Relative Write Throughput: 0 00:08:12.249 Relative Write Latency: 0 00:08:12.249 Idle Power: Not Reported 00:08:12.249 Active Power: Not Reported 00:08:12.249 Non-Operational Permissive Mode: Not Supported 00:08:12.249 00:08:12.249 Health Information 00:08:12.249 ================== 00:08:12.249 Critical Warnings: 00:08:12.249 Available Spare Space: OK 00:08:12.249 Temperature: OK 00:08:12.249 Device Reliability: OK 00:08:12.249 Read Only: No 00:08:12.249 Volatile Memory Backup: OK 00:08:12.249 Current Temperature: 323 Kelvin (50 Celsius) 00:08:12.249 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:12.249 Available Spare: 0% 00:08:12.249 Available Spare Threshold: 0% 00:08:12.249 Life Percentage Used: 0% 00:08:12.249 Data Units Read: 871 00:08:12.249 Data Units Written: 800 00:08:12.249 Host Read Commands: 36942 00:08:12.249 Host Write Commands: 36365 00:08:12.249 Controller Busy Time: 0 minutes 00:08:12.249 Power Cycles: 0 00:08:12.249 Power On Hours: 0 hours 00:08:12.249 Unsafe Shutdowns: 0 00:08:12.249 Unrecoverable Media Errors: 0 00:08:12.249 Lifetime Error Log Entries: 0 00:08:12.249 Warning Temperature Time: 0 minutes 00:08:12.249 Critical Temperature Time: 0 minutes 00:08:12.249 00:08:12.249 Number of Queues 00:08:12.249 ================ 00:08:12.249 Number of I/O Submission Queues: 64 00:08:12.249 Number of I/O Completion Queues: 64 00:08:12.249 00:08:12.249 ZNS Specific Controller Data 00:08:12.249 ============================ 00:08:12.249 Zone Append Size Limit: 0 00:08:12.249 00:08:12.249 00:08:12.249 Active Namespaces 00:08:12.249 ================= 00:08:12.249 Namespace ID:1 00:08:12.249 Error Recovery Timeout: Unlimited 00:08:12.249 Command Set Identifier: NVM (00h) 00:08:12.249 Deallocate: Supported 00:08:12.249 Deallocated/Unwritten Error: Supported 00:08:12.249 Deallocated Read Value: All 0x00 00:08:12.249 Deallocate in Write Zeroes: Not Supported 00:08:12.249 Deallocated Guard Field: 0xFFFF 00:08:12.249 Flush: Supported 00:08:12.249 Reservation: Not Supported 00:08:12.249 Namespace Sharing Capabilities: Multiple Controllers 00:08:12.249 Size (in LBAs): 262144 (1GiB) 00:08:12.249 Capacity (in LBAs): 262144 (1GiB) 00:08:12.249 Utilization (in LBAs): 262144 (1GiB) 00:08:12.249 Thin Provisioning: Not Supported 00:08:12.249 Per-NS Atomic Units: No 00:08:12.249 Maximum Single Source Range Length: 128 00:08:12.249 Maximum Copy Length: 128 00:08:12.249 Maximum Source Range Count: 128 00:08:12.249 NGUID/EUI64 Never Reused: No 00:08:12.249 Namespace Write Protected: No 00:08:12.249 Endurance group ID: 1 00:08:12.249 Number of LBA Formats: 8 00:08:12.249 Current LBA Format: LBA Format #04 00:08:12.249 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:12.249 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:12.249 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:12.249 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:12.249 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:12.249 LBA Format 
#05: Data Size: 4096 Metadata Size: 8 00:08:12.249 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:12.249 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:12.249 00:08:12.249 Get Feature FDP: 00:08:12.249 ================ 00:08:12.249 Enabled: Yes 00:08:12.249 FDP configuration index: 0 00:08:12.249 00:08:12.249 FDP configurations log page 00:08:12.249 =========================== 00:08:12.249 Number of FDP configurations: 1 00:08:12.249 Version: 0 00:08:12.249 Size: 112 00:08:12.249 FDP Configuration Descriptor: 0 00:08:12.249 Descriptor Size: 96 00:08:12.249 Reclaim Group Identifier format: 2 00:08:12.249 FDP Volatile Write Cache: Not Present 00:08:12.249 FDP Configuration: Valid 00:08:12.249 Vendor Specific Size: 0 00:08:12.249 Number of Reclaim Groups: 2 00:08:12.249 Number of Recalim Unit Handles: 8 00:08:12.249 Max Placement Identifiers: 128 00:08:12.249 Number of Namespaces Suppprted: 256 00:08:12.249 Reclaim unit Nominal Size: 6000000 bytes 00:08:12.249 Estimated Reclaim Unit Time Limit: Not Reported 00:08:12.249 RUH Desc #000: RUH Type: Initially Isolated 00:08:12.249 RUH Desc #001: RUH Type: Initially Isolated 00:08:12.249 RUH Desc #002: RUH Type: Initially Isolated 00:08:12.249 RUH Desc #003: RUH Type: Initially Isolated 00:08:12.249 RUH Desc #004: RUH Type: Initially Isolated 00:08:12.249 RUH Desc #005: RUH Type: Initially Isolated 00:08:12.249 RUH Desc #006: RUH Type: Initially Isolated 00:08:12.249 RUH Desc #007: RUH Type: Initially Isolated 00:08:12.249 00:08:12.249 FDP reclaim unit handle usage log page 00:08:12.249 ====================================== 00:08:12.249 Number of Reclaim Unit Handles: 8 00:08:12.249 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:12.249 RUH Usage Desc #001: RUH Attributes: Unused 00:08:12.249 RUH Usage Desc #002: RUH Attributes: Unused 00:08:12.249 RUH Usage Desc #003: RUH Attribu[2024-11-27 21:40:35.256223] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 74266 terminated unexpected 00:08:12.249 tes: Unused 00:08:12.249 RUH Usage Desc #004: RUH Attributes: Unused 00:08:12.249 RUH Usage Desc #005: RUH Attributes: Unused 00:08:12.249 RUH Usage Desc #006: RUH Attributes: Unused 00:08:12.249 RUH Usage Desc #007: RUH Attributes: Unused 00:08:12.249 00:08:12.249 FDP statistics log page 00:08:12.249 ======================= 00:08:12.249 Host bytes with metadata written: 516071424 00:08:12.249 Media bytes with metadata written: 516128768 00:08:12.249 Media bytes erased: 0 00:08:12.249 00:08:12.249 FDP events log page 00:08:12.249 =================== 00:08:12.249 Number of FDP events: 0 00:08:12.249 00:08:12.249 NVM Specific Namespace Data 00:08:12.249 =========================== 00:08:12.249 Logical Block Storage Tag Mask: 0 00:08:12.249 Protection Information Capabilities: 00:08:12.249 16b Guard Protection Information Storage Tag Support: No 00:08:12.249 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:12.249 Storage Tag Check Read Support: No 00:08:12.249 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.249 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.249 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.249 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.249 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard 
PI 00:08:12.249 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.249 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.249 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.249 ===================================================== 00:08:12.249 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:12.249 ===================================================== 00:08:12.249 Controller Capabilities/Features 00:08:12.249 ================================ 00:08:12.249 Vendor ID: 1b36 00:08:12.249 Subsystem Vendor ID: 1af4 00:08:12.249 Serial Number: 12341 00:08:12.249 Model Number: QEMU NVMe Ctrl 00:08:12.249 Firmware Version: 8.0.0 00:08:12.249 Recommended Arb Burst: 6 00:08:12.249 IEEE OUI Identifier: 00 54 52 00:08:12.249 Multi-path I/O 00:08:12.249 May have multiple subsystem ports: No 00:08:12.249 May have multiple controllers: No 00:08:12.249 Associated with SR-IOV VF: No 00:08:12.249 Max Data Transfer Size: 524288 00:08:12.249 Max Number of Namespaces: 256 00:08:12.249 Max Number of I/O Queues: 64 00:08:12.249 NVMe Specification Version (VS): 1.4 00:08:12.249 NVMe Specification Version (Identify): 1.4 00:08:12.249 Maximum Queue Entries: 2048 00:08:12.249 Contiguous Queues Required: Yes 00:08:12.249 Arbitration Mechanisms Supported 00:08:12.249 Weighted Round Robin: Not Supported 00:08:12.249 Vendor Specific: Not Supported 00:08:12.249 Reset Timeout: 7500 ms 00:08:12.249 Doorbell Stride: 4 bytes 00:08:12.249 NVM Subsystem Reset: Not Supported 00:08:12.249 Command Sets Supported 00:08:12.249 NVM Command Set: Supported 00:08:12.249 Boot Partition: Not Supported 00:08:12.249 Memory Page Size Minimum: 4096 bytes 00:08:12.249 Memory Page Size Maximum: 65536 bytes 00:08:12.249 Persistent Memory Region: Not Supported 00:08:12.249 Optional Asynchronous Events Supported 00:08:12.249 Namespace Attribute Notices: Supported 00:08:12.249 Firmware Activation Notices: Not Supported 00:08:12.249 ANA Change Notices: Not Supported 00:08:12.249 PLE Aggregate Log Change Notices: Not Supported 00:08:12.249 LBA Status Info Alert Notices: Not Supported 00:08:12.249 EGE Aggregate Log Change Notices: Not Supported 00:08:12.249 Normal NVM Subsystem Shutdown event: Not Supported 00:08:12.249 Zone Descriptor Change Notices: Not Supported 00:08:12.249 Discovery Log Change Notices: Not Supported 00:08:12.249 Controller Attributes 00:08:12.249 128-bit Host Identifier: Not Supported 00:08:12.249 Non-Operational Permissive Mode: Not Supported 00:08:12.249 NVM Sets: Not Supported 00:08:12.249 Read Recovery Levels: Not Supported 00:08:12.249 Endurance Groups: Not Supported 00:08:12.249 Predictable Latency Mode: Not Supported 00:08:12.250 Traffic Based Keep ALive: Not Supported 00:08:12.250 Namespace Granularity: Not Supported 00:08:12.250 SQ Associations: Not Supported 00:08:12.250 UUID List: Not Supported 00:08:12.250 Multi-Domain Subsystem: Not Supported 00:08:12.250 Fixed Capacity Management: Not Supported 00:08:12.250 Variable Capacity Management: Not Supported 00:08:12.250 Delete Endurance Group: Not Supported 00:08:12.250 Delete NVM Set: Not Supported 00:08:12.250 Extended LBA Formats Supported: Supported 00:08:12.250 Flexible Data Placement Supported: Not Supported 00:08:12.250 00:08:12.250 Controller Memory Buffer Support 00:08:12.250 ================================ 00:08:12.250 Supported: No 00:08:12.250 00:08:12.250 Persistent Memory Region Support 00:08:12.250 
================================ 00:08:12.250 Supported: No 00:08:12.250 00:08:12.250 Admin Command Set Attributes 00:08:12.250 ============================ 00:08:12.250 Security Send/Receive: Not Supported 00:08:12.250 Format NVM: Supported 00:08:12.250 Firmware Activate/Download: Not Supported 00:08:12.250 Namespace Management: Supported 00:08:12.250 Device Self-Test: Not Supported 00:08:12.250 Directives: Supported 00:08:12.250 NVMe-MI: Not Supported 00:08:12.250 Virtualization Management: Not Supported 00:08:12.250 Doorbell Buffer Config: Supported 00:08:12.250 Get LBA Status Capability: Not Supported 00:08:12.250 Command & Feature Lockdown Capability: Not Supported 00:08:12.250 Abort Command Limit: 4 00:08:12.250 Async Event Request Limit: 4 00:08:12.250 Number of Firmware Slots: N/A 00:08:12.250 Firmware Slot 1 Read-Only: N/A 00:08:12.250 Firmware Activation Without Reset: N/A 00:08:12.250 Multiple Update Detection Support: N/A 00:08:12.250 Firmware Update Granularity: No Information Provided 00:08:12.250 Per-Namespace SMART Log: Yes 00:08:12.250 Asymmetric Namespace Access Log Page: Not Supported 00:08:12.250 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:12.250 Command Effects Log Page: Supported 00:08:12.250 Get Log Page Extended Data: Supported 00:08:12.250 Telemetry Log Pages: Not Supported 00:08:12.250 Persistent Event Log Pages: Not Supported 00:08:12.250 Supported Log Pages Log Page: May Support 00:08:12.250 Commands Supported & Effects Log Page: Not Supported 00:08:12.250 Feature Identifiers & Effects Log Page:May Support 00:08:12.250 NVMe-MI Commands & Effects Log Page: May Support 00:08:12.250 Data Area 4 for Telemetry Log: Not Supported 00:08:12.250 Error Log Page Entries Supported: 1 00:08:12.250 Keep Alive: Not Supported 00:08:12.250 00:08:12.250 NVM Command Set Attributes 00:08:12.250 ========================== 00:08:12.250 Submission Queue Entry Size 00:08:12.250 Max: 64 00:08:12.250 Min: 64 00:08:12.250 Completion Queue Entry Size 00:08:12.250 Max: 16 00:08:12.250 Min: 16 00:08:12.250 Number of Namespaces: 256 00:08:12.250 Compare Command: Supported 00:08:12.250 Write Uncorrectable Command: Not Supported 00:08:12.250 Dataset Management Command: Supported 00:08:12.250 Write Zeroes Command: Supported 00:08:12.250 Set Features Save Field: Supported 00:08:12.250 Reservations: Not Supported 00:08:12.250 Timestamp: Supported 00:08:12.250 Copy: Supported 00:08:12.250 Volatile Write Cache: Present 00:08:12.250 Atomic Write Unit (Normal): 1 00:08:12.250 Atomic Write Unit (PFail): 1 00:08:12.250 Atomic Compare & Write Unit: 1 00:08:12.250 Fused Compare & Write: Not Supported 00:08:12.250 Scatter-Gather List 00:08:12.250 SGL Command Set: Supported 00:08:12.250 SGL Keyed: Not Supported 00:08:12.250 SGL Bit Bucket Descriptor: Not Supported 00:08:12.250 SGL Metadata Pointer: Not Supported 00:08:12.250 Oversized SGL: Not Supported 00:08:12.250 SGL Metadata Address: Not Supported 00:08:12.250 SGL Offset: Not Supported 00:08:12.250 Transport SGL Data Block: Not Supported 00:08:12.250 Replay Protected Memory Block: Not Supported 00:08:12.250 00:08:12.250 Firmware Slot Information 00:08:12.250 ========================= 00:08:12.250 Active slot: 1 00:08:12.250 Slot 1 Firmware Revision: 1.0 00:08:12.250 00:08:12.250 00:08:12.250 Commands Supported and Effects 00:08:12.250 ============================== 00:08:12.250 Admin Commands 00:08:12.250 -------------- 00:08:12.250 Delete I/O Submission Queue (00h): Supported 00:08:12.250 Create I/O Submission Queue (01h): Supported 00:08:12.250 
Get Log Page (02h): Supported 00:08:12.250 Delete I/O Completion Queue (04h): Supported 00:08:12.250 Create I/O Completion Queue (05h): Supported 00:08:12.250 Identify (06h): Supported 00:08:12.250 Abort (08h): Supported 00:08:12.250 Set Features (09h): Supported 00:08:12.250 Get Features (0Ah): Supported 00:08:12.250 Asynchronous Event Request (0Ch): Supported 00:08:12.250 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:12.250 Directive Send (19h): Supported 00:08:12.250 Directive Receive (1Ah): Supported 00:08:12.250 Virtualization Management (1Ch): Supported 00:08:12.250 Doorbell Buffer Config (7Ch): Supported 00:08:12.250 Format NVM (80h): Supported LBA-Change 00:08:12.250 I/O Commands 00:08:12.250 ------------ 00:08:12.250 Flush (00h): Supported LBA-Change 00:08:12.250 Write (01h): Supported LBA-Change 00:08:12.250 Read (02h): Supported 00:08:12.250 Compare (05h): Supported 00:08:12.250 Write Zeroes (08h): Supported LBA-Change 00:08:12.250 Dataset Management (09h): Supported LBA-Change 00:08:12.250 Unknown (0Ch): Supported 00:08:12.250 Unknown (12h): Supported 00:08:12.250 Copy (19h): Supported LBA-Change 00:08:12.250 Unknown (1Dh): Supported LBA-Change 00:08:12.250 00:08:12.250 Error Log 00:08:12.250 ========= 00:08:12.250 00:08:12.250 Arbitration 00:08:12.250 =========== 00:08:12.250 Arbitration Burst: no limit 00:08:12.250 00:08:12.250 Power Management 00:08:12.250 ================ 00:08:12.250 Number of Power States: 1 00:08:12.250 Current Power State: Power State #0 00:08:12.250 Power State #0: 00:08:12.250 Max Power: 25.00 W 00:08:12.250 Non-Operational State: Operational 00:08:12.250 Entry Latency: 16 microseconds 00:08:12.250 Exit Latency: 4 microseconds 00:08:12.250 Relative Read Throughput: 0 00:08:12.250 Relative Read Latency: 0 00:08:12.250 Relative Write Throughput: 0 00:08:12.250 Relative Write Latency: 0 00:08:12.250 Idle Power: Not Reported 00:08:12.250 Active Power: Not Reported 00:08:12.250 Non-Operational Permissive Mode: Not Supported 00:08:12.250 00:08:12.250 Health Information 00:08:12.250 ================== 00:08:12.250 Critical Warnings: 00:08:12.250 Available Spare Space: OK 00:08:12.250 Temperature: OK 00:08:12.250 Device Reliability: OK 00:08:12.250 Read Only: No 00:08:12.250 Volatile Memory Backup: OK 00:08:12.250 Current Temperature: 323 Kelvin (50 Celsius) 00:08:12.250 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:12.250 Available Spare: 0% 00:08:12.250 Available Spare Threshold: 0% 00:08:12.250 Life Percentage Used: 0% 00:08:12.250 Data Units Read: 986 00:08:12.250 Data Units Written: 852 00:08:12.250 Host Read Commands: 51564 00:08:12.250 Host Write Commands: 50350 00:08:12.250 Controller Busy Time: 0 minutes 00:08:12.250 Power Cycles: 0 00:08:12.250 Power On Hours: 0 hours 00:08:12.250 Unsafe Shutdowns: 0 00:08:12.250 Unrecoverable Media Errors: 0 00:08:12.250 Lifetime Error Log Entries: 0 00:08:12.250 Warning Temperature Time: 0 minutes 00:08:12.250 Critical Temperature Time: 0 minutes 00:08:12.250 00:08:12.250 Number of Queues 00:08:12.250 ================ 00:08:12.250 Number of I/O Submission Queues: 64 00:08:12.250 Number of I/O Completion Queues: 64 00:08:12.250 00:08:12.250 ZNS Specific Controller Data 00:08:12.250 ============================ 00:08:12.250 Zone Append Size Limit: 0 00:08:12.250 00:08:12.250 00:08:12.250 Active Namespaces 00:08:12.250 ================= 00:08:12.250 Namespace ID:1 00:08:12.250 Error Recovery Timeout: Unlimited 00:08:12.250 Command Set Identifier: NVM (00h) 00:08:12.250 Deallocate: Supported 
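(Editor's note) As a quick sanity check on the namespace sizes in these identify reports (the 12341 namespace data continues just below), capacity in bytes is simply the LBA count multiplied by the data size of the current LBA format, here format #04 with 4096-byte blocks:

    # 12341 namespace: 1310720 LBAs at 4096 bytes per LBA
    echo $(( 1310720 * 4096 ))            # 5368709120 bytes
    echo $(( 1310720 * 4096 / 1024**3 ))  # 5 (GiB), matching the "(5GiB)" reported
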
00:08:12.250 Deallocated/Unwritten Error: Supported 00:08:12.250 Deallocated Read Value: All 0x00 00:08:12.250 Deallocate in Write Zeroes: Not Supported 00:08:12.250 Deallocated Guard Field: 0xFFFF 00:08:12.250 Flush: Supported 00:08:12.250 Reservation: Not Supported 00:08:12.250 Namespace Sharing Capabilities: Private 00:08:12.250 Size (in LBAs): 1310720 (5GiB) 00:08:12.250 Capacity (in LBAs): 1310720 (5GiB) 00:08:12.250 Utilization (in LBAs): 1310720 (5GiB) 00:08:12.250 Thin Provisioning: Not Supported 00:08:12.250 Per-NS Atomic Units: No 00:08:12.250 Maximum Single Source Range Length: 128 00:08:12.250 Maximum Copy Length: 128 00:08:12.250 Maximum Source Range Count: 128 00:08:12.250 NGUID/EUI64 Never Reused: No 00:08:12.250 Namespace Write Protected: No 00:08:12.250 Number of LBA Formats: 8 00:08:12.250 Current LBA Format: LBA Format #04 00:08:12.250 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:12.250 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:12.250 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:12.250 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:12.250 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:12.250 LBA Format[2024-11-27 21:40:35.258573] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 74266 terminated unexpected 00:08:12.250 #05: Data Size: 4096 Metadata Size: 8 00:08:12.250 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:12.250 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:12.250 00:08:12.250 NVM Specific Namespace Data 00:08:12.250 =========================== 00:08:12.250 Logical Block Storage Tag Mask: 0 00:08:12.250 Protection Information Capabilities: 00:08:12.250 16b Guard Protection Information Storage Tag Support: No 00:08:12.250 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:12.250 Storage Tag Check Read Support: No 00:08:12.250 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.250 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.250 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.250 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.250 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.250 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.250 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.250 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.250 ===================================================== 00:08:12.250 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:12.250 ===================================================== 00:08:12.250 Controller Capabilities/Features 00:08:12.250 ================================ 00:08:12.250 Vendor ID: 1b36 00:08:12.250 Subsystem Vendor ID: 1af4 00:08:12.250 Serial Number: 12342 00:08:12.250 Model Number: QEMU NVMe Ctrl 00:08:12.250 Firmware Version: 8.0.0 00:08:12.250 Recommended Arb Burst: 6 00:08:12.250 IEEE OUI Identifier: 00 54 52 00:08:12.250 Multi-path I/O 00:08:12.250 May have multiple subsystem ports: No 00:08:12.250 May have multiple controllers: No 00:08:12.250 Associated with SR-IOV VF: No 00:08:12.250 Max Data Transfer Size: 524288 00:08:12.250 Max Number of Namespaces: 256 00:08:12.250 Max 
Number of I/O Queues: 64 00:08:12.250 NVMe Specification Version (VS): 1.4 00:08:12.250 NVMe Specification Version (Identify): 1.4 00:08:12.250 Maximum Queue Entries: 2048 00:08:12.250 Contiguous Queues Required: Yes 00:08:12.250 Arbitration Mechanisms Supported 00:08:12.250 Weighted Round Robin: Not Supported 00:08:12.250 Vendor Specific: Not Supported 00:08:12.250 Reset Timeout: 7500 ms 00:08:12.250 Doorbell Stride: 4 bytes 00:08:12.250 NVM Subsystem Reset: Not Supported 00:08:12.250 Command Sets Supported 00:08:12.250 NVM Command Set: Supported 00:08:12.250 Boot Partition: Not Supported 00:08:12.250 Memory Page Size Minimum: 4096 bytes 00:08:12.250 Memory Page Size Maximum: 65536 bytes 00:08:12.250 Persistent Memory Region: Not Supported 00:08:12.250 Optional Asynchronous Events Supported 00:08:12.250 Namespace Attribute Notices: Supported 00:08:12.250 Firmware Activation Notices: Not Supported 00:08:12.250 ANA Change Notices: Not Supported 00:08:12.250 PLE Aggregate Log Change Notices: Not Supported 00:08:12.250 LBA Status Info Alert Notices: Not Supported 00:08:12.250 EGE Aggregate Log Change Notices: Not Supported 00:08:12.250 Normal NVM Subsystem Shutdown event: Not Supported 00:08:12.250 Zone Descriptor Change Notices: Not Supported 00:08:12.250 Discovery Log Change Notices: Not Supported 00:08:12.250 Controller Attributes 00:08:12.250 128-bit Host Identifier: Not Supported 00:08:12.250 Non-Operational Permissive Mode: Not Supported 00:08:12.250 NVM Sets: Not Supported 00:08:12.250 Read Recovery Levels: Not Supported 00:08:12.250 Endurance Groups: Not Supported 00:08:12.250 Predictable Latency Mode: Not Supported 00:08:12.250 Traffic Based Keep ALive: Not Supported 00:08:12.250 Namespace Granularity: Not Supported 00:08:12.250 SQ Associations: Not Supported 00:08:12.250 UUID List: Not Supported 00:08:12.250 Multi-Domain Subsystem: Not Supported 00:08:12.250 Fixed Capacity Management: Not Supported 00:08:12.250 Variable Capacity Management: Not Supported 00:08:12.250 Delete Endurance Group: Not Supported 00:08:12.250 Delete NVM Set: Not Supported 00:08:12.250 Extended LBA Formats Supported: Supported 00:08:12.250 Flexible Data Placement Supported: Not Supported 00:08:12.250 00:08:12.250 Controller Memory Buffer Support 00:08:12.250 ================================ 00:08:12.250 Supported: No 00:08:12.250 00:08:12.250 Persistent Memory Region Support 00:08:12.250 ================================ 00:08:12.250 Supported: No 00:08:12.250 00:08:12.250 Admin Command Set Attributes 00:08:12.250 ============================ 00:08:12.250 Security Send/Receive: Not Supported 00:08:12.250 Format NVM: Supported 00:08:12.250 Firmware Activate/Download: Not Supported 00:08:12.250 Namespace Management: Supported 00:08:12.250 Device Self-Test: Not Supported 00:08:12.250 Directives: Supported 00:08:12.250 NVMe-MI: Not Supported 00:08:12.250 Virtualization Management: Not Supported 00:08:12.250 Doorbell Buffer Config: Supported 00:08:12.250 Get LBA Status Capability: Not Supported 00:08:12.250 Command & Feature Lockdown Capability: Not Supported 00:08:12.250 Abort Command Limit: 4 00:08:12.250 Async Event Request Limit: 4 00:08:12.250 Number of Firmware Slots: N/A 00:08:12.250 Firmware Slot 1 Read-Only: N/A 00:08:12.250 Firmware Activation Without Reset: N/A 00:08:12.250 Multiple Update Detection Support: N/A 00:08:12.250 Firmware Update Granularity: No Information Provided 00:08:12.250 Per-Namespace SMART Log: Yes 00:08:12.250 Asymmetric Namespace Access Log Page: Not Supported 00:08:12.250 Subsystem 
NQN: nqn.2019-08.org.qemu:12342 00:08:12.250 Command Effects Log Page: Supported 00:08:12.250 Get Log Page Extended Data: Supported 00:08:12.250 Telemetry Log Pages: Not Supported 00:08:12.250 Persistent Event Log Pages: Not Supported 00:08:12.250 Supported Log Pages Log Page: May Support 00:08:12.250 Commands Supported & Effects Log Page: Not Supported 00:08:12.250 Feature Identifiers & Effects Log Page:May Support 00:08:12.250 NVMe-MI Commands & Effects Log Page: May Support 00:08:12.250 Data Area 4 for Telemetry Log: Not Supported 00:08:12.250 Error Log Page Entries Supported: 1 00:08:12.250 Keep Alive: Not Supported 00:08:12.250 00:08:12.250 NVM Command Set Attributes 00:08:12.250 ========================== 00:08:12.250 Submission Queue Entry Size 00:08:12.250 Max: 64 00:08:12.250 Min: 64 00:08:12.250 Completion Queue Entry Size 00:08:12.250 Max: 16 00:08:12.250 Min: 16 00:08:12.250 Number of Namespaces: 256 00:08:12.250 Compare Command: Supported 00:08:12.250 Write Uncorrectable Command: Not Supported 00:08:12.250 Dataset Management Command: Supported 00:08:12.250 Write Zeroes Command: Supported 00:08:12.250 Set Features Save Field: Supported 00:08:12.250 Reservations: Not Supported 00:08:12.250 Timestamp: Supported 00:08:12.250 Copy: Supported 00:08:12.250 Volatile Write Cache: Present 00:08:12.250 Atomic Write Unit (Normal): 1 00:08:12.250 Atomic Write Unit (PFail): 1 00:08:12.250 Atomic Compare & Write Unit: 1 00:08:12.250 Fused Compare & Write: Not Supported 00:08:12.250 Scatter-Gather List 00:08:12.250 SGL Command Set: Supported 00:08:12.250 SGL Keyed: Not Supported 00:08:12.250 SGL Bit Bucket Descriptor: Not Supported 00:08:12.250 SGL Metadata Pointer: Not Supported 00:08:12.250 Oversized SGL: Not Supported 00:08:12.250 SGL Metadata Address: Not Supported 00:08:12.250 SGL Offset: Not Supported 00:08:12.250 Transport SGL Data Block: Not Supported 00:08:12.250 Replay Protected Memory Block: Not Supported 00:08:12.250 00:08:12.250 Firmware Slot Information 00:08:12.250 ========================= 00:08:12.250 Active slot: 1 00:08:12.250 Slot 1 Firmware Revision: 1.0 00:08:12.250 00:08:12.250 00:08:12.250 Commands Supported and Effects 00:08:12.250 ============================== 00:08:12.250 Admin Commands 00:08:12.250 -------------- 00:08:12.250 Delete I/O Submission Queue (00h): Supported 00:08:12.250 Create I/O Submission Queue (01h): Supported 00:08:12.250 Get Log Page (02h): Supported 00:08:12.250 Delete I/O Completion Queue (04h): Supported 00:08:12.250 Create I/O Completion Queue (05h): Supported 00:08:12.250 Identify (06h): Supported 00:08:12.250 Abort (08h): Supported 00:08:12.250 Set Features (09h): Supported 00:08:12.250 Get Features (0Ah): Supported 00:08:12.250 Asynchronous Event Request (0Ch): Supported 00:08:12.250 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:12.250 Directive Send (19h): Supported 00:08:12.250 Directive Receive (1Ah): Supported 00:08:12.250 Virtualization Management (1Ch): Supported 00:08:12.250 Doorbell Buffer Config (7Ch): Supported 00:08:12.250 Format NVM (80h): Supported LBA-Change 00:08:12.250 I/O Commands 00:08:12.250 ------------ 00:08:12.250 Flush (00h): Supported LBA-Change 00:08:12.250 Write (01h): Supported LBA-Change 00:08:12.250 Read (02h): Supported 00:08:12.250 Compare (05h): Supported 00:08:12.250 Write Zeroes (08h): Supported LBA-Change 00:08:12.250 Dataset Management (09h): Supported LBA-Change 00:08:12.250 Unknown (0Ch): Supported 00:08:12.250 Unknown (12h): Supported 00:08:12.250 Copy (19h): Supported LBA-Change 
00:08:12.250 Unknown (1Dh): Supported LBA-Change 00:08:12.250 00:08:12.250 Error Log 00:08:12.250 ========= 00:08:12.250 00:08:12.250 Arbitration 00:08:12.250 =========== 00:08:12.251 Arbitration Burst: no limit 00:08:12.251 00:08:12.251 Power Management 00:08:12.251 ================ 00:08:12.251 Number of Power States: 1 00:08:12.251 Current Power State: Power State #0 00:08:12.251 Power State #0: 00:08:12.251 Max Power: 25.00 W 00:08:12.251 Non-Operational State: Operational 00:08:12.251 Entry Latency: 16 microseconds 00:08:12.251 Exit Latency: 4 microseconds 00:08:12.251 Relative Read Throughput: 0 00:08:12.251 Relative Read Latency: 0 00:08:12.251 Relative Write Throughput: 0 00:08:12.251 Relative Write Latency: 0 00:08:12.251 Idle Power: Not Reported 00:08:12.251 Active Power: Not Reported 00:08:12.251 Non-Operational Permissive Mode: Not Supported 00:08:12.251 00:08:12.251 Health Information 00:08:12.251 ================== 00:08:12.251 Critical Warnings: 00:08:12.251 Available Spare Space: OK 00:08:12.251 Temperature: OK 00:08:12.251 Device Reliability: OK 00:08:12.251 Read Only: No 00:08:12.251 Volatile Memory Backup: OK 00:08:12.251 Current Temperature: 323 Kelvin (50 Celsius) 00:08:12.251 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:12.251 Available Spare: 0% 00:08:12.251 Available Spare Threshold: 0% 00:08:12.251 Life Percentage Used: 0% 00:08:12.251 Data Units Read: 2082 00:08:12.251 Data Units Written: 1869 00:08:12.251 Host Read Commands: 106267 00:08:12.251 Host Write Commands: 104536 00:08:12.251 Controller Busy Time: 0 minutes 00:08:12.251 Power Cycles: 0 00:08:12.251 Power On Hours: 0 hours 00:08:12.251 Unsafe Shutdowns: 0 00:08:12.251 Unrecoverable Media Errors: 0 00:08:12.251 Lifetime Error Log Entries: 0 00:08:12.251 Warning Temperature Time: 0 minutes 00:08:12.251 Critical Temperature Time: 0 minutes 00:08:12.251 00:08:12.251 Number of Queues 00:08:12.251 ================ 00:08:12.251 Number of I/O Submission Queues: 64 00:08:12.251 Number of I/O Completion Queues: 64 00:08:12.251 00:08:12.251 ZNS Specific Controller Data 00:08:12.251 ============================ 00:08:12.251 Zone Append Size Limit: 0 00:08:12.251 00:08:12.251 00:08:12.251 Active Namespaces 00:08:12.251 ================= 00:08:12.251 Namespace ID:1 00:08:12.251 Error Recovery Timeout: Unlimited 00:08:12.251 Command Set Identifier: NVM (00h) 00:08:12.251 Deallocate: Supported 00:08:12.251 Deallocated/Unwritten Error: Supported 00:08:12.251 Deallocated Read Value: All 0x00 00:08:12.251 Deallocate in Write Zeroes: Not Supported 00:08:12.251 Deallocated Guard Field: 0xFFFF 00:08:12.251 Flush: Supported 00:08:12.251 Reservation: Not Supported 00:08:12.251 Namespace Sharing Capabilities: Private 00:08:12.251 Size (in LBAs): 1048576 (4GiB) 00:08:12.251 Capacity (in LBAs): 1048576 (4GiB) 00:08:12.251 Utilization (in LBAs): 1048576 (4GiB) 00:08:12.251 Thin Provisioning: Not Supported 00:08:12.251 Per-NS Atomic Units: No 00:08:12.251 Maximum Single Source Range Length: 128 00:08:12.251 Maximum Copy Length: 128 00:08:12.251 Maximum Source Range Count: 128 00:08:12.251 NGUID/EUI64 Never Reused: No 00:08:12.251 Namespace Write Protected: No 00:08:12.251 Number of LBA Formats: 8 00:08:12.251 Current LBA Format: LBA Format #04 00:08:12.251 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:12.251 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:12.251 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:12.251 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:12.251 LBA Format #04: Data Size: 
4096 Metadata Size: 0 00:08:12.251 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:12.251 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:12.251 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:12.251 00:08:12.251 NVM Specific Namespace Data 00:08:12.251 =========================== 00:08:12.251 Logical Block Storage Tag Mask: 0 00:08:12.251 Protection Information Capabilities: 00:08:12.251 16b Guard Protection Information Storage Tag Support: No 00:08:12.251 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:12.251 Storage Tag Check Read Support: No 00:08:12.251 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.251 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.251 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.251 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.251 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.251 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.251 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.251 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.251 Namespace ID:2 00:08:12.251 Error Recovery Timeout: Unlimited 00:08:12.251 Command Set Identifier: NVM (00h) 00:08:12.251 Deallocate: Supported 00:08:12.251 Deallocated/Unwritten Error: Supported 00:08:12.251 Deallocated Read Value: All 0x00 00:08:12.251 Deallocate in Write Zeroes: Not Supported 00:08:12.251 Deallocated Guard Field: 0xFFFF 00:08:12.251 Flush: Supported 00:08:12.251 Reservation: Not Supported 00:08:12.251 Namespace Sharing Capabilities: Private 00:08:12.251 Size (in LBAs): 1048576 (4GiB) 00:08:12.251 Capacity (in LBAs): 1048576 (4GiB) 00:08:12.251 Utilization (in LBAs): 1048576 (4GiB) 00:08:12.251 Thin Provisioning: Not Supported 00:08:12.251 Per-NS Atomic Units: No 00:08:12.251 Maximum Single Source Range Length: 128 00:08:12.251 Maximum Copy Length: 128 00:08:12.251 Maximum Source Range Count: 128 00:08:12.251 NGUID/EUI64 Never Reused: No 00:08:12.251 Namespace Write Protected: No 00:08:12.251 Number of LBA Formats: 8 00:08:12.251 Current LBA Format: LBA Format #04 00:08:12.251 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:12.251 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:12.251 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:12.251 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:12.251 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:12.251 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:12.251 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:12.251 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:12.251 00:08:12.251 NVM Specific Namespace Data 00:08:12.251 =========================== 00:08:12.251 Logical Block Storage Tag Mask: 0 00:08:12.251 Protection Information Capabilities: 00:08:12.251 16b Guard Protection Information Storage Tag Support: No 00:08:12.251 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:12.251 Storage Tag Check Read Support: No 00:08:12.251 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.251 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard 
PI 00:08:12.251 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.251 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.251 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.251 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.251 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.251 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.251 Namespace ID:3 00:08:12.251 Error Recovery Timeout: Unlimited 00:08:12.251 Command Set Identifier: NVM (00h) 00:08:12.251 Deallocate: Supported 00:08:12.251 Deallocated/Unwritten Error: Supported 00:08:12.251 Deallocated Read Value: All 0x00 00:08:12.251 Deallocate in Write Zeroes: Not Supported 00:08:12.251 Deallocated Guard Field: 0xFFFF 00:08:12.251 Flush: Supported 00:08:12.251 Reservation: Not Supported 00:08:12.251 Namespace Sharing Capabilities: Private 00:08:12.251 Size (in LBAs): 1048576 (4GiB) 00:08:12.251 Capacity (in LBAs): 1048576 (4GiB) 00:08:12.251 Utilization (in LBAs): 1048576 (4GiB) 00:08:12.251 Thin Provisioning: Not Supported 00:08:12.251 Per-NS Atomic Units: No 00:08:12.251 Maximum Single Source Range Length: 128 00:08:12.251 Maximum Copy Length: 128 00:08:12.251 Maximum Source Range Count: 128 00:08:12.251 NGUID/EUI64 Never Reused: No 00:08:12.251 Namespace Write Protected: No 00:08:12.251 Number of LBA Formats: 8 00:08:12.251 Current LBA Format: LBA Format #04 00:08:12.251 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:12.251 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:12.251 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:12.251 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:12.251 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:12.251 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:12.251 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:12.251 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:12.251 00:08:12.251 NVM Specific Namespace Data 00:08:12.251 =========================== 00:08:12.251 Logical Block Storage Tag Mask: 0 00:08:12.251 Protection Information Capabilities: 00:08:12.251 16b Guard Protection Information Storage Tag Support: No 00:08:12.251 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:12.251 Storage Tag Check Read Support: No 00:08:12.251 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.251 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.251 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.251 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.251 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.251 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.251 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.251 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.251 21:40:35 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:12.251 21:40:35 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:12.512 ===================================================== 00:08:12.512 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:12.512 ===================================================== 00:08:12.512 Controller Capabilities/Features 00:08:12.512 ================================ 00:08:12.512 Vendor ID: 1b36 00:08:12.512 Subsystem Vendor ID: 1af4 00:08:12.512 Serial Number: 12340 00:08:12.512 Model Number: QEMU NVMe Ctrl 00:08:12.512 Firmware Version: 8.0.0 00:08:12.512 Recommended Arb Burst: 6 00:08:12.512 IEEE OUI Identifier: 00 54 52 00:08:12.512 Multi-path I/O 00:08:12.512 May have multiple subsystem ports: No 00:08:12.512 May have multiple controllers: No 00:08:12.512 Associated with SR-IOV VF: No 00:08:12.512 Max Data Transfer Size: 524288 00:08:12.512 Max Number of Namespaces: 256 00:08:12.512 Max Number of I/O Queues: 64 00:08:12.512 NVMe Specification Version (VS): 1.4 00:08:12.512 NVMe Specification Version (Identify): 1.4 00:08:12.512 Maximum Queue Entries: 2048 00:08:12.512 Contiguous Queues Required: Yes 00:08:12.512 Arbitration Mechanisms Supported 00:08:12.512 Weighted Round Robin: Not Supported 00:08:12.512 Vendor Specific: Not Supported 00:08:12.512 Reset Timeout: 7500 ms 00:08:12.512 Doorbell Stride: 4 bytes 00:08:12.512 NVM Subsystem Reset: Not Supported 00:08:12.512 Command Sets Supported 00:08:12.512 NVM Command Set: Supported 00:08:12.512 Boot Partition: Not Supported 00:08:12.512 Memory Page Size Minimum: 4096 bytes 00:08:12.512 Memory Page Size Maximum: 65536 bytes 00:08:12.512 Persistent Memory Region: Not Supported 00:08:12.512 Optional Asynchronous Events Supported 00:08:12.512 Namespace Attribute Notices: Supported 00:08:12.512 Firmware Activation Notices: Not Supported 00:08:12.512 ANA Change Notices: Not Supported 00:08:12.512 PLE Aggregate Log Change Notices: Not Supported 00:08:12.512 LBA Status Info Alert Notices: Not Supported 00:08:12.512 EGE Aggregate Log Change Notices: Not Supported 00:08:12.512 Normal NVM Subsystem Shutdown event: Not Supported 00:08:12.512 Zone Descriptor Change Notices: Not Supported 00:08:12.512 Discovery Log Change Notices: Not Supported 00:08:12.512 Controller Attributes 00:08:12.512 128-bit Host Identifier: Not Supported 00:08:12.512 Non-Operational Permissive Mode: Not Supported 00:08:12.512 NVM Sets: Not Supported 00:08:12.512 Read Recovery Levels: Not Supported 00:08:12.512 Endurance Groups: Not Supported 00:08:12.512 Predictable Latency Mode: Not Supported 00:08:12.512 Traffic Based Keep ALive: Not Supported 00:08:12.512 Namespace Granularity: Not Supported 00:08:12.512 SQ Associations: Not Supported 00:08:12.512 UUID List: Not Supported 00:08:12.512 Multi-Domain Subsystem: Not Supported 00:08:12.512 Fixed Capacity Management: Not Supported 00:08:12.512 Variable Capacity Management: Not Supported 00:08:12.512 Delete Endurance Group: Not Supported 00:08:12.512 Delete NVM Set: Not Supported 00:08:12.512 Extended LBA Formats Supported: Supported 00:08:12.512 Flexible Data Placement Supported: Not Supported 00:08:12.512 00:08:12.512 Controller Memory Buffer Support 00:08:12.512 ================================ 00:08:12.512 Supported: No 00:08:12.512 00:08:12.512 Persistent Memory Region Support 00:08:12.512 ================================ 00:08:12.512 Supported: No 00:08:12.512 00:08:12.512 Admin Command Set Attributes 00:08:12.512 ============================ 00:08:12.512 Security Send/Receive: Not Supported 00:08:12.512 
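The xtrace lines just above come from nvme.sh (lines 15-16): the test loops over the detected controllers and re-runs spdk_nvme_identify against each PCIe address, which is what produces the dump for 0000:00:10.0 that continues below. A minimal standalone sketch of that loop, assuming the four QEMU controllers seen in this log and an illustrative output directory (neither taken from the test script itself), could look like:

```bash
#!/usr/bin/env bash
# Standalone sketch of the per-controller identify loop visible in the nvme.sh
# xtrace above. The BDF list, output directory, and file naming are illustrative
# assumptions; the binary path and flags are copied from this log.
set -euo pipefail

IDENTIFY=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)   # the four QEMU controllers dumped in this log
outdir=$(mktemp -d)

for bdf in "${bdfs[@]}"; do
    # Same transport string and -i value as the invocations shown in this log
    "$IDENTIFY" -r "trtype:PCIe traddr:${bdf}" -i 0 > "${outdir}/${bdf}.identify.txt"
done

echo "identify output saved under ${outdir}"
```

Saving each dump to its own file also makes the per-field checks sketched further below straightforward.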
Format NVM: Supported 00:08:12.512 Firmware Activate/Download: Not Supported 00:08:12.512 Namespace Management: Supported 00:08:12.513 Device Self-Test: Not Supported 00:08:12.513 Directives: Supported 00:08:12.513 NVMe-MI: Not Supported 00:08:12.513 Virtualization Management: Not Supported 00:08:12.513 Doorbell Buffer Config: Supported 00:08:12.513 Get LBA Status Capability: Not Supported 00:08:12.513 Command & Feature Lockdown Capability: Not Supported 00:08:12.513 Abort Command Limit: 4 00:08:12.513 Async Event Request Limit: 4 00:08:12.513 Number of Firmware Slots: N/A 00:08:12.513 Firmware Slot 1 Read-Only: N/A 00:08:12.513 Firmware Activation Without Reset: N/A 00:08:12.513 Multiple Update Detection Support: N/A 00:08:12.513 Firmware Update Granularity: No Information Provided 00:08:12.513 Per-Namespace SMART Log: Yes 00:08:12.513 Asymmetric Namespace Access Log Page: Not Supported 00:08:12.513 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:12.513 Command Effects Log Page: Supported 00:08:12.513 Get Log Page Extended Data: Supported 00:08:12.513 Telemetry Log Pages: Not Supported 00:08:12.513 Persistent Event Log Pages: Not Supported 00:08:12.513 Supported Log Pages Log Page: May Support 00:08:12.513 Commands Supported & Effects Log Page: Not Supported 00:08:12.513 Feature Identifiers & Effects Log Page:May Support 00:08:12.513 NVMe-MI Commands & Effects Log Page: May Support 00:08:12.513 Data Area 4 for Telemetry Log: Not Supported 00:08:12.513 Error Log Page Entries Supported: 1 00:08:12.513 Keep Alive: Not Supported 00:08:12.513 00:08:12.513 NVM Command Set Attributes 00:08:12.513 ========================== 00:08:12.513 Submission Queue Entry Size 00:08:12.513 Max: 64 00:08:12.513 Min: 64 00:08:12.513 Completion Queue Entry Size 00:08:12.513 Max: 16 00:08:12.513 Min: 16 00:08:12.513 Number of Namespaces: 256 00:08:12.513 Compare Command: Supported 00:08:12.513 Write Uncorrectable Command: Not Supported 00:08:12.513 Dataset Management Command: Supported 00:08:12.513 Write Zeroes Command: Supported 00:08:12.513 Set Features Save Field: Supported 00:08:12.513 Reservations: Not Supported 00:08:12.513 Timestamp: Supported 00:08:12.513 Copy: Supported 00:08:12.513 Volatile Write Cache: Present 00:08:12.513 Atomic Write Unit (Normal): 1 00:08:12.513 Atomic Write Unit (PFail): 1 00:08:12.513 Atomic Compare & Write Unit: 1 00:08:12.513 Fused Compare & Write: Not Supported 00:08:12.513 Scatter-Gather List 00:08:12.513 SGL Command Set: Supported 00:08:12.513 SGL Keyed: Not Supported 00:08:12.513 SGL Bit Bucket Descriptor: Not Supported 00:08:12.513 SGL Metadata Pointer: Not Supported 00:08:12.513 Oversized SGL: Not Supported 00:08:12.513 SGL Metadata Address: Not Supported 00:08:12.513 SGL Offset: Not Supported 00:08:12.513 Transport SGL Data Block: Not Supported 00:08:12.513 Replay Protected Memory Block: Not Supported 00:08:12.513 00:08:12.513 Firmware Slot Information 00:08:12.513 ========================= 00:08:12.513 Active slot: 1 00:08:12.513 Slot 1 Firmware Revision: 1.0 00:08:12.513 00:08:12.513 00:08:12.513 Commands Supported and Effects 00:08:12.513 ============================== 00:08:12.513 Admin Commands 00:08:12.513 -------------- 00:08:12.513 Delete I/O Submission Queue (00h): Supported 00:08:12.513 Create I/O Submission Queue (01h): Supported 00:08:12.513 Get Log Page (02h): Supported 00:08:12.513 Delete I/O Completion Queue (04h): Supported 00:08:12.513 Create I/O Completion Queue (05h): Supported 00:08:12.513 Identify (06h): Supported 00:08:12.513 Abort (08h): Supported 
00:08:12.513 Set Features (09h): Supported 00:08:12.513 Get Features (0Ah): Supported 00:08:12.513 Asynchronous Event Request (0Ch): Supported 00:08:12.513 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:12.513 Directive Send (19h): Supported 00:08:12.513 Directive Receive (1Ah): Supported 00:08:12.513 Virtualization Management (1Ch): Supported 00:08:12.513 Doorbell Buffer Config (7Ch): Supported 00:08:12.513 Format NVM (80h): Supported LBA-Change 00:08:12.513 I/O Commands 00:08:12.513 ------------ 00:08:12.513 Flush (00h): Supported LBA-Change 00:08:12.513 Write (01h): Supported LBA-Change 00:08:12.513 Read (02h): Supported 00:08:12.513 Compare (05h): Supported 00:08:12.513 Write Zeroes (08h): Supported LBA-Change 00:08:12.513 Dataset Management (09h): Supported LBA-Change 00:08:12.513 Unknown (0Ch): Supported 00:08:12.513 Unknown (12h): Supported 00:08:12.513 Copy (19h): Supported LBA-Change 00:08:12.513 Unknown (1Dh): Supported LBA-Change 00:08:12.513 00:08:12.513 Error Log 00:08:12.513 ========= 00:08:12.513 00:08:12.513 Arbitration 00:08:12.513 =========== 00:08:12.513 Arbitration Burst: no limit 00:08:12.513 00:08:12.513 Power Management 00:08:12.513 ================ 00:08:12.513 Number of Power States: 1 00:08:12.513 Current Power State: Power State #0 00:08:12.513 Power State #0: 00:08:12.513 Max Power: 25.00 W 00:08:12.513 Non-Operational State: Operational 00:08:12.513 Entry Latency: 16 microseconds 00:08:12.513 Exit Latency: 4 microseconds 00:08:12.513 Relative Read Throughput: 0 00:08:12.513 Relative Read Latency: 0 00:08:12.513 Relative Write Throughput: 0 00:08:12.513 Relative Write Latency: 0 00:08:12.513 Idle Power: Not Reported 00:08:12.513 Active Power: Not Reported 00:08:12.513 Non-Operational Permissive Mode: Not Supported 00:08:12.513 00:08:12.513 Health Information 00:08:12.513 ================== 00:08:12.513 Critical Warnings: 00:08:12.513 Available Spare Space: OK 00:08:12.513 Temperature: OK 00:08:12.513 Device Reliability: OK 00:08:12.513 Read Only: No 00:08:12.513 Volatile Memory Backup: OK 00:08:12.513 Current Temperature: 323 Kelvin (50 Celsius) 00:08:12.513 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:12.513 Available Spare: 0% 00:08:12.513 Available Spare Threshold: 0% 00:08:12.513 Life Percentage Used: 0% 00:08:12.513 Data Units Read: 642 00:08:12.513 Data Units Written: 570 00:08:12.513 Host Read Commands: 34791 00:08:12.513 Host Write Commands: 34577 00:08:12.513 Controller Busy Time: 0 minutes 00:08:12.513 Power Cycles: 0 00:08:12.513 Power On Hours: 0 hours 00:08:12.513 Unsafe Shutdowns: 0 00:08:12.513 Unrecoverable Media Errors: 0 00:08:12.513 Lifetime Error Log Entries: 0 00:08:12.513 Warning Temperature Time: 0 minutes 00:08:12.513 Critical Temperature Time: 0 minutes 00:08:12.513 00:08:12.513 Number of Queues 00:08:12.513 ================ 00:08:12.513 Number of I/O Submission Queues: 64 00:08:12.513 Number of I/O Completion Queues: 64 00:08:12.513 00:08:12.513 ZNS Specific Controller Data 00:08:12.513 ============================ 00:08:12.513 Zone Append Size Limit: 0 00:08:12.513 00:08:12.513 00:08:12.513 Active Namespaces 00:08:12.513 ================= 00:08:12.513 Namespace ID:1 00:08:12.513 Error Recovery Timeout: Unlimited 00:08:12.513 Command Set Identifier: NVM (00h) 00:08:12.513 Deallocate: Supported 00:08:12.513 Deallocated/Unwritten Error: Supported 00:08:12.513 Deallocated Read Value: All 0x00 00:08:12.513 Deallocate in Write Zeroes: Not Supported 00:08:12.513 Deallocated Guard Field: 0xFFFF 00:08:12.513 Flush: 
Supported 00:08:12.513 Reservation: Not Supported 00:08:12.513 Metadata Transferred as: Separate Metadata Buffer 00:08:12.513 Namespace Sharing Capabilities: Private 00:08:12.513 Size (in LBAs): 1548666 (5GiB) 00:08:12.513 Capacity (in LBAs): 1548666 (5GiB) 00:08:12.513 Utilization (in LBAs): 1548666 (5GiB) 00:08:12.513 Thin Provisioning: Not Supported 00:08:12.513 Per-NS Atomic Units: No 00:08:12.513 Maximum Single Source Range Length: 128 00:08:12.513 Maximum Copy Length: 128 00:08:12.513 Maximum Source Range Count: 128 00:08:12.513 NGUID/EUI64 Never Reused: No 00:08:12.513 Namespace Write Protected: No 00:08:12.513 Number of LBA Formats: 8 00:08:12.513 Current LBA Format: LBA Format #07 00:08:12.513 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:12.513 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:12.513 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:12.513 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:12.513 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:12.513 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:12.513 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:12.513 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:12.513 00:08:12.513 NVM Specific Namespace Data 00:08:12.513 =========================== 00:08:12.513 Logical Block Storage Tag Mask: 0 00:08:12.513 Protection Information Capabilities: 00:08:12.513 16b Guard Protection Information Storage Tag Support: No 00:08:12.513 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:12.513 Storage Tag Check Read Support: No 00:08:12.513 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.513 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.513 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.513 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.513 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.513 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.513 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.513 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.513 21:40:35 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:12.513 21:40:35 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:12.774 ===================================================== 00:08:12.774 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:12.774 ===================================================== 00:08:12.774 Controller Capabilities/Features 00:08:12.774 ================================ 00:08:12.774 Vendor ID: 1b36 00:08:12.774 Subsystem Vendor ID: 1af4 00:08:12.774 Serial Number: 12341 00:08:12.774 Model Number: QEMU NVMe Ctrl 00:08:12.774 Firmware Version: 8.0.0 00:08:12.774 Recommended Arb Burst: 6 00:08:12.774 IEEE OUI Identifier: 00 54 52 00:08:12.774 Multi-path I/O 00:08:12.774 May have multiple subsystem ports: No 00:08:12.775 May have multiple controllers: No 00:08:12.775 Associated with SR-IOV VF: No 00:08:12.775 Max Data Transfer Size: 524288 00:08:12.775 Max Number of Namespaces: 256 00:08:12.775 Max Number of I/O Queues: 64 00:08:12.775 NVMe 
Specification Version (VS): 1.4 00:08:12.775 NVMe Specification Version (Identify): 1.4 00:08:12.775 Maximum Queue Entries: 2048 00:08:12.775 Contiguous Queues Required: Yes 00:08:12.775 Arbitration Mechanisms Supported 00:08:12.775 Weighted Round Robin: Not Supported 00:08:12.775 Vendor Specific: Not Supported 00:08:12.775 Reset Timeout: 7500 ms 00:08:12.775 Doorbell Stride: 4 bytes 00:08:12.775 NVM Subsystem Reset: Not Supported 00:08:12.775 Command Sets Supported 00:08:12.775 NVM Command Set: Supported 00:08:12.775 Boot Partition: Not Supported 00:08:12.775 Memory Page Size Minimum: 4096 bytes 00:08:12.775 Memory Page Size Maximum: 65536 bytes 00:08:12.775 Persistent Memory Region: Not Supported 00:08:12.775 Optional Asynchronous Events Supported 00:08:12.775 Namespace Attribute Notices: Supported 00:08:12.775 Firmware Activation Notices: Not Supported 00:08:12.775 ANA Change Notices: Not Supported 00:08:12.775 PLE Aggregate Log Change Notices: Not Supported 00:08:12.775 LBA Status Info Alert Notices: Not Supported 00:08:12.775 EGE Aggregate Log Change Notices: Not Supported 00:08:12.775 Normal NVM Subsystem Shutdown event: Not Supported 00:08:12.775 Zone Descriptor Change Notices: Not Supported 00:08:12.775 Discovery Log Change Notices: Not Supported 00:08:12.775 Controller Attributes 00:08:12.775 128-bit Host Identifier: Not Supported 00:08:12.775 Non-Operational Permissive Mode: Not Supported 00:08:12.775 NVM Sets: Not Supported 00:08:12.775 Read Recovery Levels: Not Supported 00:08:12.775 Endurance Groups: Not Supported 00:08:12.775 Predictable Latency Mode: Not Supported 00:08:12.775 Traffic Based Keep ALive: Not Supported 00:08:12.775 Namespace Granularity: Not Supported 00:08:12.775 SQ Associations: Not Supported 00:08:12.775 UUID List: Not Supported 00:08:12.775 Multi-Domain Subsystem: Not Supported 00:08:12.775 Fixed Capacity Management: Not Supported 00:08:12.775 Variable Capacity Management: Not Supported 00:08:12.775 Delete Endurance Group: Not Supported 00:08:12.775 Delete NVM Set: Not Supported 00:08:12.775 Extended LBA Formats Supported: Supported 00:08:12.775 Flexible Data Placement Supported: Not Supported 00:08:12.775 00:08:12.775 Controller Memory Buffer Support 00:08:12.775 ================================ 00:08:12.775 Supported: No 00:08:12.775 00:08:12.775 Persistent Memory Region Support 00:08:12.775 ================================ 00:08:12.775 Supported: No 00:08:12.775 00:08:12.775 Admin Command Set Attributes 00:08:12.775 ============================ 00:08:12.775 Security Send/Receive: Not Supported 00:08:12.775 Format NVM: Supported 00:08:12.775 Firmware Activate/Download: Not Supported 00:08:12.775 Namespace Management: Supported 00:08:12.775 Device Self-Test: Not Supported 00:08:12.775 Directives: Supported 00:08:12.775 NVMe-MI: Not Supported 00:08:12.775 Virtualization Management: Not Supported 00:08:12.775 Doorbell Buffer Config: Supported 00:08:12.775 Get LBA Status Capability: Not Supported 00:08:12.775 Command & Feature Lockdown Capability: Not Supported 00:08:12.775 Abort Command Limit: 4 00:08:12.775 Async Event Request Limit: 4 00:08:12.775 Number of Firmware Slots: N/A 00:08:12.775 Firmware Slot 1 Read-Only: N/A 00:08:12.775 Firmware Activation Without Reset: N/A 00:08:12.775 Multiple Update Detection Support: N/A 00:08:12.775 Firmware Update Granularity: No Information Provided 00:08:12.775 Per-Namespace SMART Log: Yes 00:08:12.775 Asymmetric Namespace Access Log Page: Not Supported 00:08:12.775 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:08:12.775 Command Effects Log Page: Supported 00:08:12.775 Get Log Page Extended Data: Supported 00:08:12.775 Telemetry Log Pages: Not Supported 00:08:12.775 Persistent Event Log Pages: Not Supported 00:08:12.775 Supported Log Pages Log Page: May Support 00:08:12.775 Commands Supported & Effects Log Page: Not Supported 00:08:12.775 Feature Identifiers & Effects Log Page:May Support 00:08:12.775 NVMe-MI Commands & Effects Log Page: May Support 00:08:12.775 Data Area 4 for Telemetry Log: Not Supported 00:08:12.775 Error Log Page Entries Supported: 1 00:08:12.775 Keep Alive: Not Supported 00:08:12.775 00:08:12.775 NVM Command Set Attributes 00:08:12.775 ========================== 00:08:12.775 Submission Queue Entry Size 00:08:12.775 Max: 64 00:08:12.775 Min: 64 00:08:12.775 Completion Queue Entry Size 00:08:12.775 Max: 16 00:08:12.775 Min: 16 00:08:12.775 Number of Namespaces: 256 00:08:12.775 Compare Command: Supported 00:08:12.775 Write Uncorrectable Command: Not Supported 00:08:12.775 Dataset Management Command: Supported 00:08:12.775 Write Zeroes Command: Supported 00:08:12.775 Set Features Save Field: Supported 00:08:12.775 Reservations: Not Supported 00:08:12.775 Timestamp: Supported 00:08:12.775 Copy: Supported 00:08:12.775 Volatile Write Cache: Present 00:08:12.775 Atomic Write Unit (Normal): 1 00:08:12.775 Atomic Write Unit (PFail): 1 00:08:12.775 Atomic Compare & Write Unit: 1 00:08:12.775 Fused Compare & Write: Not Supported 00:08:12.775 Scatter-Gather List 00:08:12.775 SGL Command Set: Supported 00:08:12.775 SGL Keyed: Not Supported 00:08:12.775 SGL Bit Bucket Descriptor: Not Supported 00:08:12.775 SGL Metadata Pointer: Not Supported 00:08:12.775 Oversized SGL: Not Supported 00:08:12.775 SGL Metadata Address: Not Supported 00:08:12.775 SGL Offset: Not Supported 00:08:12.775 Transport SGL Data Block: Not Supported 00:08:12.775 Replay Protected Memory Block: Not Supported 00:08:12.775 00:08:12.775 Firmware Slot Information 00:08:12.775 ========================= 00:08:12.775 Active slot: 1 00:08:12.775 Slot 1 Firmware Revision: 1.0 00:08:12.775 00:08:12.775 00:08:12.775 Commands Supported and Effects 00:08:12.775 ============================== 00:08:12.775 Admin Commands 00:08:12.775 -------------- 00:08:12.775 Delete I/O Submission Queue (00h): Supported 00:08:12.775 Create I/O Submission Queue (01h): Supported 00:08:12.775 Get Log Page (02h): Supported 00:08:12.775 Delete I/O Completion Queue (04h): Supported 00:08:12.775 Create I/O Completion Queue (05h): Supported 00:08:12.775 Identify (06h): Supported 00:08:12.775 Abort (08h): Supported 00:08:12.775 Set Features (09h): Supported 00:08:12.775 Get Features (0Ah): Supported 00:08:12.775 Asynchronous Event Request (0Ch): Supported 00:08:12.775 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:12.775 Directive Send (19h): Supported 00:08:12.775 Directive Receive (1Ah): Supported 00:08:12.775 Virtualization Management (1Ch): Supported 00:08:12.775 Doorbell Buffer Config (7Ch): Supported 00:08:12.775 Format NVM (80h): Supported LBA-Change 00:08:12.775 I/O Commands 00:08:12.775 ------------ 00:08:12.775 Flush (00h): Supported LBA-Change 00:08:12.775 Write (01h): Supported LBA-Change 00:08:12.775 Read (02h): Supported 00:08:12.775 Compare (05h): Supported 00:08:12.775 Write Zeroes (08h): Supported LBA-Change 00:08:12.775 Dataset Management (09h): Supported LBA-Change 00:08:12.775 Unknown (0Ch): Supported 00:08:12.775 Unknown (12h): Supported 00:08:12.775 Copy (19h): Supported LBA-Change 00:08:12.775 Unknown (1Dh): 
Supported LBA-Change 00:08:12.775 00:08:12.775 Error Log 00:08:12.775 ========= 00:08:12.775 00:08:12.775 Arbitration 00:08:12.775 =========== 00:08:12.775 Arbitration Burst: no limit 00:08:12.775 00:08:12.775 Power Management 00:08:12.775 ================ 00:08:12.775 Number of Power States: 1 00:08:12.775 Current Power State: Power State #0 00:08:12.775 Power State #0: 00:08:12.775 Max Power: 25.00 W 00:08:12.775 Non-Operational State: Operational 00:08:12.775 Entry Latency: 16 microseconds 00:08:12.775 Exit Latency: 4 microseconds 00:08:12.775 Relative Read Throughput: 0 00:08:12.775 Relative Read Latency: 0 00:08:12.775 Relative Write Throughput: 0 00:08:12.775 Relative Write Latency: 0 00:08:12.775 Idle Power: Not Reported 00:08:12.775 Active Power: Not Reported 00:08:12.775 Non-Operational Permissive Mode: Not Supported 00:08:12.775 00:08:12.775 Health Information 00:08:12.775 ================== 00:08:12.775 Critical Warnings: 00:08:12.775 Available Spare Space: OK 00:08:12.775 Temperature: OK 00:08:12.775 Device Reliability: OK 00:08:12.775 Read Only: No 00:08:12.775 Volatile Memory Backup: OK 00:08:12.775 Current Temperature: 323 Kelvin (50 Celsius) 00:08:12.776 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:12.776 Available Spare: 0% 00:08:12.776 Available Spare Threshold: 0% 00:08:12.776 Life Percentage Used: 0% 00:08:12.776 Data Units Read: 986 00:08:12.776 Data Units Written: 852 00:08:12.776 Host Read Commands: 51564 00:08:12.776 Host Write Commands: 50350 00:08:12.776 Controller Busy Time: 0 minutes 00:08:12.776 Power Cycles: 0 00:08:12.776 Power On Hours: 0 hours 00:08:12.776 Unsafe Shutdowns: 0 00:08:12.776 Unrecoverable Media Errors: 0 00:08:12.776 Lifetime Error Log Entries: 0 00:08:12.776 Warning Temperature Time: 0 minutes 00:08:12.776 Critical Temperature Time: 0 minutes 00:08:12.776 00:08:12.776 Number of Queues 00:08:12.776 ================ 00:08:12.776 Number of I/O Submission Queues: 64 00:08:12.776 Number of I/O Completion Queues: 64 00:08:12.776 00:08:12.776 ZNS Specific Controller Data 00:08:12.776 ============================ 00:08:12.776 Zone Append Size Limit: 0 00:08:12.776 00:08:12.776 00:08:12.776 Active Namespaces 00:08:12.776 ================= 00:08:12.776 Namespace ID:1 00:08:12.776 Error Recovery Timeout: Unlimited 00:08:12.776 Command Set Identifier: NVM (00h) 00:08:12.776 Deallocate: Supported 00:08:12.776 Deallocated/Unwritten Error: Supported 00:08:12.776 Deallocated Read Value: All 0x00 00:08:12.776 Deallocate in Write Zeroes: Not Supported 00:08:12.776 Deallocated Guard Field: 0xFFFF 00:08:12.776 Flush: Supported 00:08:12.776 Reservation: Not Supported 00:08:12.776 Namespace Sharing Capabilities: Private 00:08:12.776 Size (in LBAs): 1310720 (5GiB) 00:08:12.776 Capacity (in LBAs): 1310720 (5GiB) 00:08:12.776 Utilization (in LBAs): 1310720 (5GiB) 00:08:12.776 Thin Provisioning: Not Supported 00:08:12.776 Per-NS Atomic Units: No 00:08:12.776 Maximum Single Source Range Length: 128 00:08:12.776 Maximum Copy Length: 128 00:08:12.776 Maximum Source Range Count: 128 00:08:12.776 NGUID/EUI64 Never Reused: No 00:08:12.776 Namespace Write Protected: No 00:08:12.776 Number of LBA Formats: 8 00:08:12.776 Current LBA Format: LBA Format #04 00:08:12.776 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:12.776 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:12.776 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:12.776 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:12.776 LBA Format #04: Data Size: 4096 Metadata Size: 0 
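Each dump above ends with a Health Information block giving the current temperature and threshold in Kelvin (with the Celsius value in parentheses), available spare, and host command counts. Purely as an illustration, and assuming the per-controller capture files from the previous sketch, those fields can be pulled out and sanity-checked like this:

```bash
#!/usr/bin/env bash
# Illustrative only: scan saved spdk_nvme_identify output for the temperature
# fields shown in the Health Information blocks above. The <dir>/<bdf>.identify.txt
# layout is the assumption made in the previous sketch.
set -euo pipefail

dir=${1:?usage: $0 <identify-output-dir>}

for f in "$dir"/*.identify.txt; do
    awk -v file="$f" '
        /Current Temperature:/   { cur = $3 }   # Kelvin, e.g. "Current Temperature: 323 Kelvin (50 Celsius)"
        /Temperature Threshold:/ { thr = $3 }   # Kelvin, e.g. "Temperature Threshold: 343 Kelvin (70 Celsius)"
        END {
            printf "%s: current %d K (%d C), threshold %d K\n", file, cur, cur - 273, thr
            if (cur + 0 >= thr + 0)
                printf "%s: controller is at or above its temperature threshold\n", file
        }' "$f"
done
```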
00:08:12.776 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:12.776 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:12.776 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:12.776 00:08:12.776 NVM Specific Namespace Data 00:08:12.776 =========================== 00:08:12.776 Logical Block Storage Tag Mask: 0 00:08:12.776 Protection Information Capabilities: 00:08:12.776 16b Guard Protection Information Storage Tag Support: No 00:08:12.776 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:12.776 Storage Tag Check Read Support: No 00:08:12.776 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.776 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.776 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.776 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.776 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.776 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.776 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.776 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.776 21:40:35 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:12.776 21:40:35 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:12.776 ===================================================== 00:08:12.776 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:12.776 ===================================================== 00:08:12.776 Controller Capabilities/Features 00:08:12.776 ================================ 00:08:12.776 Vendor ID: 1b36 00:08:12.776 Subsystem Vendor ID: 1af4 00:08:12.776 Serial Number: 12342 00:08:12.776 Model Number: QEMU NVMe Ctrl 00:08:12.776 Firmware Version: 8.0.0 00:08:12.776 Recommended Arb Burst: 6 00:08:12.776 IEEE OUI Identifier: 00 54 52 00:08:12.776 Multi-path I/O 00:08:12.776 May have multiple subsystem ports: No 00:08:12.776 May have multiple controllers: No 00:08:12.776 Associated with SR-IOV VF: No 00:08:12.776 Max Data Transfer Size: 524288 00:08:12.776 Max Number of Namespaces: 256 00:08:12.776 Max Number of I/O Queues: 64 00:08:12.776 NVMe Specification Version (VS): 1.4 00:08:12.776 NVMe Specification Version (Identify): 1.4 00:08:12.776 Maximum Queue Entries: 2048 00:08:12.776 Contiguous Queues Required: Yes 00:08:12.776 Arbitration Mechanisms Supported 00:08:12.776 Weighted Round Robin: Not Supported 00:08:12.776 Vendor Specific: Not Supported 00:08:12.776 Reset Timeout: 7500 ms 00:08:12.776 Doorbell Stride: 4 bytes 00:08:12.776 NVM Subsystem Reset: Not Supported 00:08:12.776 Command Sets Supported 00:08:12.776 NVM Command Set: Supported 00:08:12.776 Boot Partition: Not Supported 00:08:12.776 Memory Page Size Minimum: 4096 bytes 00:08:12.776 Memory Page Size Maximum: 65536 bytes 00:08:12.776 Persistent Memory Region: Not Supported 00:08:12.776 Optional Asynchronous Events Supported 00:08:12.776 Namespace Attribute Notices: Supported 00:08:12.776 Firmware Activation Notices: Not Supported 00:08:12.776 ANA Change Notices: Not Supported 00:08:12.776 PLE Aggregate Log Change Notices: Not Supported 00:08:12.776 LBA Status Info Alert Notices: 
Not Supported 00:08:12.776 EGE Aggregate Log Change Notices: Not Supported 00:08:12.776 Normal NVM Subsystem Shutdown event: Not Supported 00:08:12.776 Zone Descriptor Change Notices: Not Supported 00:08:12.776 Discovery Log Change Notices: Not Supported 00:08:12.776 Controller Attributes 00:08:12.776 128-bit Host Identifier: Not Supported 00:08:12.776 Non-Operational Permissive Mode: Not Supported 00:08:12.776 NVM Sets: Not Supported 00:08:12.776 Read Recovery Levels: Not Supported 00:08:12.776 Endurance Groups: Not Supported 00:08:12.776 Predictable Latency Mode: Not Supported 00:08:12.776 Traffic Based Keep ALive: Not Supported 00:08:12.776 Namespace Granularity: Not Supported 00:08:12.776 SQ Associations: Not Supported 00:08:12.776 UUID List: Not Supported 00:08:12.776 Multi-Domain Subsystem: Not Supported 00:08:12.776 Fixed Capacity Management: Not Supported 00:08:12.776 Variable Capacity Management: Not Supported 00:08:12.776 Delete Endurance Group: Not Supported 00:08:12.776 Delete NVM Set: Not Supported 00:08:12.776 Extended LBA Formats Supported: Supported 00:08:12.776 Flexible Data Placement Supported: Not Supported 00:08:12.776 00:08:12.776 Controller Memory Buffer Support 00:08:12.776 ================================ 00:08:12.776 Supported: No 00:08:12.776 00:08:12.776 Persistent Memory Region Support 00:08:12.776 ================================ 00:08:12.776 Supported: No 00:08:12.776 00:08:12.776 Admin Command Set Attributes 00:08:12.776 ============================ 00:08:12.776 Security Send/Receive: Not Supported 00:08:12.776 Format NVM: Supported 00:08:12.776 Firmware Activate/Download: Not Supported 00:08:12.776 Namespace Management: Supported 00:08:12.776 Device Self-Test: Not Supported 00:08:12.776 Directives: Supported 00:08:12.776 NVMe-MI: Not Supported 00:08:12.776 Virtualization Management: Not Supported 00:08:12.776 Doorbell Buffer Config: Supported 00:08:12.776 Get LBA Status Capability: Not Supported 00:08:12.776 Command & Feature Lockdown Capability: Not Supported 00:08:12.776 Abort Command Limit: 4 00:08:12.776 Async Event Request Limit: 4 00:08:12.776 Number of Firmware Slots: N/A 00:08:12.776 Firmware Slot 1 Read-Only: N/A 00:08:12.776 Firmware Activation Without Reset: N/A 00:08:12.776 Multiple Update Detection Support: N/A 00:08:12.776 Firmware Update Granularity: No Information Provided 00:08:12.776 Per-Namespace SMART Log: Yes 00:08:12.776 Asymmetric Namespace Access Log Page: Not Supported 00:08:12.776 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:12.776 Command Effects Log Page: Supported 00:08:12.776 Get Log Page Extended Data: Supported 00:08:12.776 Telemetry Log Pages: Not Supported 00:08:12.776 Persistent Event Log Pages: Not Supported 00:08:12.776 Supported Log Pages Log Page: May Support 00:08:12.776 Commands Supported & Effects Log Page: Not Supported 00:08:12.776 Feature Identifiers & Effects Log Page:May Support 00:08:12.776 NVMe-MI Commands & Effects Log Page: May Support 00:08:12.776 Data Area 4 for Telemetry Log: Not Supported 00:08:12.776 Error Log Page Entries Supported: 1 00:08:12.776 Keep Alive: Not Supported 00:08:12.776 00:08:12.776 NVM Command Set Attributes 00:08:12.776 ========================== 00:08:12.777 Submission Queue Entry Size 00:08:12.777 Max: 64 00:08:12.777 Min: 64 00:08:12.777 Completion Queue Entry Size 00:08:12.777 Max: 16 00:08:12.777 Min: 16 00:08:12.777 Number of Namespaces: 256 00:08:12.777 Compare Command: Supported 00:08:12.777 Write Uncorrectable Command: Not Supported 00:08:12.777 Dataset Management Command: 
Supported 00:08:12.777 Write Zeroes Command: Supported 00:08:12.777 Set Features Save Field: Supported 00:08:12.777 Reservations: Not Supported 00:08:12.777 Timestamp: Supported 00:08:12.777 Copy: Supported 00:08:12.777 Volatile Write Cache: Present 00:08:12.777 Atomic Write Unit (Normal): 1 00:08:12.777 Atomic Write Unit (PFail): 1 00:08:12.777 Atomic Compare & Write Unit: 1 00:08:12.777 Fused Compare & Write: Not Supported 00:08:12.777 Scatter-Gather List 00:08:12.777 SGL Command Set: Supported 00:08:12.777 SGL Keyed: Not Supported 00:08:12.777 SGL Bit Bucket Descriptor: Not Supported 00:08:12.777 SGL Metadata Pointer: Not Supported 00:08:12.777 Oversized SGL: Not Supported 00:08:12.777 SGL Metadata Address: Not Supported 00:08:12.777 SGL Offset: Not Supported 00:08:12.777 Transport SGL Data Block: Not Supported 00:08:12.777 Replay Protected Memory Block: Not Supported 00:08:12.777 00:08:12.777 Firmware Slot Information 00:08:12.777 ========================= 00:08:12.777 Active slot: 1 00:08:12.777 Slot 1 Firmware Revision: 1.0 00:08:12.777 00:08:12.777 00:08:12.777 Commands Supported and Effects 00:08:12.777 ============================== 00:08:12.777 Admin Commands 00:08:12.777 -------------- 00:08:12.777 Delete I/O Submission Queue (00h): Supported 00:08:12.777 Create I/O Submission Queue (01h): Supported 00:08:12.777 Get Log Page (02h): Supported 00:08:12.777 Delete I/O Completion Queue (04h): Supported 00:08:12.777 Create I/O Completion Queue (05h): Supported 00:08:12.777 Identify (06h): Supported 00:08:12.777 Abort (08h): Supported 00:08:12.777 Set Features (09h): Supported 00:08:12.777 Get Features (0Ah): Supported 00:08:12.777 Asynchronous Event Request (0Ch): Supported 00:08:12.777 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:12.777 Directive Send (19h): Supported 00:08:12.777 Directive Receive (1Ah): Supported 00:08:12.777 Virtualization Management (1Ch): Supported 00:08:12.777 Doorbell Buffer Config (7Ch): Supported 00:08:12.777 Format NVM (80h): Supported LBA-Change 00:08:12.777 I/O Commands 00:08:12.777 ------------ 00:08:12.777 Flush (00h): Supported LBA-Change 00:08:12.777 Write (01h): Supported LBA-Change 00:08:12.777 Read (02h): Supported 00:08:12.777 Compare (05h): Supported 00:08:12.777 Write Zeroes (08h): Supported LBA-Change 00:08:12.777 Dataset Management (09h): Supported LBA-Change 00:08:12.777 Unknown (0Ch): Supported 00:08:12.777 Unknown (12h): Supported 00:08:12.777 Copy (19h): Supported LBA-Change 00:08:12.777 Unknown (1Dh): Supported LBA-Change 00:08:12.777 00:08:12.777 Error Log 00:08:12.777 ========= 00:08:12.777 00:08:12.777 Arbitration 00:08:12.777 =========== 00:08:12.777 Arbitration Burst: no limit 00:08:12.777 00:08:12.777 Power Management 00:08:12.777 ================ 00:08:12.777 Number of Power States: 1 00:08:12.777 Current Power State: Power State #0 00:08:12.777 Power State #0: 00:08:12.777 Max Power: 25.00 W 00:08:12.777 Non-Operational State: Operational 00:08:12.777 Entry Latency: 16 microseconds 00:08:12.777 Exit Latency: 4 microseconds 00:08:12.777 Relative Read Throughput: 0 00:08:12.777 Relative Read Latency: 0 00:08:12.777 Relative Write Throughput: 0 00:08:12.777 Relative Write Latency: 0 00:08:12.777 Idle Power: Not Reported 00:08:12.777 Active Power: Not Reported 00:08:12.777 Non-Operational Permissive Mode: Not Supported 00:08:12.777 00:08:12.777 Health Information 00:08:12.777 ================== 00:08:12.777 Critical Warnings: 00:08:12.777 Available Spare Space: OK 00:08:12.777 Temperature: OK 00:08:12.777 Device 
Reliability: OK 00:08:12.777 Read Only: No 00:08:12.777 Volatile Memory Backup: OK 00:08:12.777 Current Temperature: 323 Kelvin (50 Celsius) 00:08:12.777 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:12.777 Available Spare: 0% 00:08:12.777 Available Spare Threshold: 0% 00:08:12.777 Life Percentage Used: 0% 00:08:12.777 Data Units Read: 2082 00:08:12.777 Data Units Written: 1869 00:08:12.777 Host Read Commands: 106267 00:08:12.777 Host Write Commands: 104536 00:08:12.777 Controller Busy Time: 0 minutes 00:08:12.777 Power Cycles: 0 00:08:12.777 Power On Hours: 0 hours 00:08:12.777 Unsafe Shutdowns: 0 00:08:12.777 Unrecoverable Media Errors: 0 00:08:12.777 Lifetime Error Log Entries: 0 00:08:12.777 Warning Temperature Time: 0 minutes 00:08:12.777 Critical Temperature Time: 0 minutes 00:08:12.777 00:08:12.777 Number of Queues 00:08:12.777 ================ 00:08:12.777 Number of I/O Submission Queues: 64 00:08:12.777 Number of I/O Completion Queues: 64 00:08:12.777 00:08:12.777 ZNS Specific Controller Data 00:08:12.777 ============================ 00:08:12.777 Zone Append Size Limit: 0 00:08:12.777 00:08:12.777 00:08:12.777 Active Namespaces 00:08:12.777 ================= 00:08:12.777 Namespace ID:1 00:08:12.777 Error Recovery Timeout: Unlimited 00:08:12.777 Command Set Identifier: NVM (00h) 00:08:12.777 Deallocate: Supported 00:08:12.777 Deallocated/Unwritten Error: Supported 00:08:12.777 Deallocated Read Value: All 0x00 00:08:12.777 Deallocate in Write Zeroes: Not Supported 00:08:12.777 Deallocated Guard Field: 0xFFFF 00:08:12.777 Flush: Supported 00:08:12.777 Reservation: Not Supported 00:08:12.777 Namespace Sharing Capabilities: Private 00:08:12.777 Size (in LBAs): 1048576 (4GiB) 00:08:12.777 Capacity (in LBAs): 1048576 (4GiB) 00:08:12.777 Utilization (in LBAs): 1048576 (4GiB) 00:08:12.777 Thin Provisioning: Not Supported 00:08:12.777 Per-NS Atomic Units: No 00:08:12.777 Maximum Single Source Range Length: 128 00:08:12.777 Maximum Copy Length: 128 00:08:12.777 Maximum Source Range Count: 128 00:08:12.777 NGUID/EUI64 Never Reused: No 00:08:12.777 Namespace Write Protected: No 00:08:12.777 Number of LBA Formats: 8 00:08:12.777 Current LBA Format: LBA Format #04 00:08:12.777 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:12.777 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:12.777 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:12.777 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:12.777 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:12.777 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:12.777 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:12.777 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:12.777 00:08:12.777 NVM Specific Namespace Data 00:08:12.777 =========================== 00:08:12.777 Logical Block Storage Tag Mask: 0 00:08:12.777 Protection Information Capabilities: 00:08:12.777 16b Guard Protection Information Storage Tag Support: No 00:08:12.777 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:12.777 Storage Tag Check Read Support: No 00:08:12.777 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.777 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.777 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.777 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.777 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.777 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.777 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.777 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.777 Namespace ID:2 00:08:12.777 Error Recovery Timeout: Unlimited 00:08:12.777 Command Set Identifier: NVM (00h) 00:08:12.777 Deallocate: Supported 00:08:12.777 Deallocated/Unwritten Error: Supported 00:08:12.777 Deallocated Read Value: All 0x00 00:08:12.777 Deallocate in Write Zeroes: Not Supported 00:08:12.777 Deallocated Guard Field: 0xFFFF 00:08:12.777 Flush: Supported 00:08:12.777 Reservation: Not Supported 00:08:12.777 Namespace Sharing Capabilities: Private 00:08:12.777 Size (in LBAs): 1048576 (4GiB) 00:08:12.777 Capacity (in LBAs): 1048576 (4GiB) 00:08:12.777 Utilization (in LBAs): 1048576 (4GiB) 00:08:12.777 Thin Provisioning: Not Supported 00:08:12.777 Per-NS Atomic Units: No 00:08:12.777 Maximum Single Source Range Length: 128 00:08:12.777 Maximum Copy Length: 128 00:08:12.777 Maximum Source Range Count: 128 00:08:12.777 NGUID/EUI64 Never Reused: No 00:08:12.777 Namespace Write Protected: No 00:08:12.777 Number of LBA Formats: 8 00:08:12.777 Current LBA Format: LBA Format #04 00:08:12.778 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:12.778 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:12.778 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:12.778 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:12.778 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:12.778 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:12.778 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:12.778 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:12.778 00:08:12.778 NVM Specific Namespace Data 00:08:12.778 =========================== 00:08:12.778 Logical Block Storage Tag Mask: 0 00:08:12.778 Protection Information Capabilities: 00:08:12.778 16b Guard Protection Information Storage Tag Support: No 00:08:12.778 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:12.778 Storage Tag Check Read Support: No 00:08:12.778 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.778 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.778 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.778 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.778 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.778 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.778 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.778 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.778 Namespace ID:3 00:08:12.778 Error Recovery Timeout: Unlimited 00:08:12.778 Command Set Identifier: NVM (00h) 00:08:12.778 Deallocate: Supported 00:08:12.778 Deallocated/Unwritten Error: Supported 00:08:12.778 Deallocated Read Value: All 0x00 00:08:12.778 Deallocate in Write Zeroes: Not Supported 00:08:12.778 Deallocated Guard Field: 0xFFFF 00:08:12.778 Flush: Supported 00:08:12.778 Reservation: Not Supported 00:08:12.778 
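The namespaces above report their size both in LBAs and as a rounded GiB figure; for namespaces using LBA Format #04 (4096-byte data size, no metadata) the two agree, e.g. 1310720 x 4096 B = 5 GiB and 1048576 x 4096 B = 4 GiB. A quick check using only values copied from these dumps:

```bash
# Consistency check: LBA count times the current LBA format's data size should
# equal the reported capacity. All values are copied from the dumps above.
lba_size=4096                      # LBA Format #04: Data Size: 4096, Metadata Size: 0
for lbas in 1310720 1048576; do    # the 5GiB and 4GiB namespaces reported above
    bytes=$(( lbas * lba_size ))
    gib=$(( bytes / 1024 / 1024 / 1024 ))
    echo "${lbas} LBAs x ${lba_size} B = ${bytes} B = ${gib} GiB"
done
# Expected:
# 1310720 LBAs x 4096 B = 5368709120 B = 5 GiB
# 1048576 LBAs x 4096 B = 4294967296 B = 4 GiB
```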
Namespace Sharing Capabilities: Private 00:08:12.778 Size (in LBAs): 1048576 (4GiB) 00:08:12.778 Capacity (in LBAs): 1048576 (4GiB) 00:08:12.778 Utilization (in LBAs): 1048576 (4GiB) 00:08:12.778 Thin Provisioning: Not Supported 00:08:12.778 Per-NS Atomic Units: No 00:08:12.778 Maximum Single Source Range Length: 128 00:08:12.778 Maximum Copy Length: 128 00:08:12.778 Maximum Source Range Count: 128 00:08:12.778 NGUID/EUI64 Never Reused: No 00:08:12.778 Namespace Write Protected: No 00:08:12.778 Number of LBA Formats: 8 00:08:12.778 Current LBA Format: LBA Format #04 00:08:12.778 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:12.778 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:12.778 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:12.778 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:12.778 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:12.778 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:12.778 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:12.778 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:12.778 00:08:12.778 NVM Specific Namespace Data 00:08:12.778 =========================== 00:08:12.778 Logical Block Storage Tag Mask: 0 00:08:12.778 Protection Information Capabilities: 00:08:12.778 16b Guard Protection Information Storage Tag Support: No 00:08:12.778 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:12.778 Storage Tag Check Read Support: No 00:08:12.778 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.778 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.778 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.778 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.778 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.778 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.778 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.778 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:12.778 21:40:35 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:12.778 21:40:35 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:13.040 ===================================================== 00:08:13.040 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:13.040 ===================================================== 00:08:13.040 Controller Capabilities/Features 00:08:13.040 ================================ 00:08:13.040 Vendor ID: 1b36 00:08:13.040 Subsystem Vendor ID: 1af4 00:08:13.040 Serial Number: 12343 00:08:13.040 Model Number: QEMU NVMe Ctrl 00:08:13.040 Firmware Version: 8.0.0 00:08:13.040 Recommended Arb Burst: 6 00:08:13.040 IEEE OUI Identifier: 00 54 52 00:08:13.040 Multi-path I/O 00:08:13.040 May have multiple subsystem ports: No 00:08:13.040 May have multiple controllers: Yes 00:08:13.040 Associated with SR-IOV VF: No 00:08:13.040 Max Data Transfer Size: 524288 00:08:13.040 Max Number of Namespaces: 256 00:08:13.040 Max Number of I/O Queues: 64 00:08:13.040 NVMe Specification Version (VS): 1.4 00:08:13.040 NVMe Specification Version (Identify): 1.4 00:08:13.040 Maximum Queue Entries: 2048 
00:08:13.040 Contiguous Queues Required: Yes 00:08:13.040 Arbitration Mechanisms Supported 00:08:13.040 Weighted Round Robin: Not Supported 00:08:13.040 Vendor Specific: Not Supported 00:08:13.040 Reset Timeout: 7500 ms 00:08:13.040 Doorbell Stride: 4 bytes 00:08:13.040 NVM Subsystem Reset: Not Supported 00:08:13.040 Command Sets Supported 00:08:13.040 NVM Command Set: Supported 00:08:13.040 Boot Partition: Not Supported 00:08:13.040 Memory Page Size Minimum: 4096 bytes 00:08:13.040 Memory Page Size Maximum: 65536 bytes 00:08:13.040 Persistent Memory Region: Not Supported 00:08:13.040 Optional Asynchronous Events Supported 00:08:13.040 Namespace Attribute Notices: Supported 00:08:13.040 Firmware Activation Notices: Not Supported 00:08:13.040 ANA Change Notices: Not Supported 00:08:13.040 PLE Aggregate Log Change Notices: Not Supported 00:08:13.040 LBA Status Info Alert Notices: Not Supported 00:08:13.040 EGE Aggregate Log Change Notices: Not Supported 00:08:13.040 Normal NVM Subsystem Shutdown event: Not Supported 00:08:13.040 Zone Descriptor Change Notices: Not Supported 00:08:13.040 Discovery Log Change Notices: Not Supported 00:08:13.040 Controller Attributes 00:08:13.040 128-bit Host Identifier: Not Supported 00:08:13.040 Non-Operational Permissive Mode: Not Supported 00:08:13.040 NVM Sets: Not Supported 00:08:13.040 Read Recovery Levels: Not Supported 00:08:13.040 Endurance Groups: Supported 00:08:13.040 Predictable Latency Mode: Not Supported 00:08:13.040 Traffic Based Keep Alive: Not Supported 00:08:13.040 Namespace Granularity: Not Supported 00:08:13.040 SQ Associations: Not Supported 00:08:13.040 UUID List: Not Supported 00:08:13.040 Multi-Domain Subsystem: Not Supported 00:08:13.040 Fixed Capacity Management: Not Supported 00:08:13.040 Variable Capacity Management: Not Supported 00:08:13.040 Delete Endurance Group: Not Supported 00:08:13.040 Delete NVM Set: Not Supported 00:08:13.040 Extended LBA Formats Supported: Supported 00:08:13.040 Flexible Data Placement Supported: Supported 00:08:13.040 00:08:13.040 Controller Memory Buffer Support 00:08:13.040 ================================ 00:08:13.040 Supported: No 00:08:13.040 00:08:13.040 Persistent Memory Region Support 00:08:13.040 ================================ 00:08:13.040 Supported: No 00:08:13.040 00:08:13.040 Admin Command Set Attributes 00:08:13.040 ============================ 00:08:13.040 Security Send/Receive: Not Supported 00:08:13.040 Format NVM: Supported 00:08:13.040 Firmware Activate/Download: Not Supported 00:08:13.040 Namespace Management: Supported 00:08:13.040 Device Self-Test: Not Supported 00:08:13.040 Directives: Supported 00:08:13.040 NVMe-MI: Not Supported 00:08:13.040 Virtualization Management: Not Supported 00:08:13.040 Doorbell Buffer Config: Supported 00:08:13.040 Get LBA Status Capability: Not Supported 00:08:13.040 Command & Feature Lockdown Capability: Not Supported 00:08:13.040 Abort Command Limit: 4 00:08:13.040 Async Event Request Limit: 4 00:08:13.040 Number of Firmware Slots: N/A 00:08:13.040 Firmware Slot 1 Read-Only: N/A 00:08:13.040 Firmware Activation Without Reset: N/A 00:08:13.040 Multiple Update Detection Support: N/A 00:08:13.040 Firmware Update Granularity: No Information Provided 00:08:13.040 Per-Namespace SMART Log: Yes 00:08:13.040 Asymmetric Namespace Access Log Page: Not Supported 00:08:13.040 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:13.040 Command Effects Log Page: Supported 00:08:13.040 Get Log Page Extended Data: Supported 00:08:13.040 Telemetry Log Pages: Not 
Supported 00:08:13.040 Persistent Event Log Pages: Not Supported 00:08:13.040 Supported Log Pages Log Page: May Support 00:08:13.040 Commands Supported & Effects Log Page: Not Supported 00:08:13.040 Feature Identifiers & Effects Log Page:May Support 00:08:13.040 NVMe-MI Commands & Effects Log Page: May Support 00:08:13.040 Data Area 4 for Telemetry Log: Not Supported 00:08:13.040 Error Log Page Entries Supported: 1 00:08:13.040 Keep Alive: Not Supported 00:08:13.040 00:08:13.040 NVM Command Set Attributes 00:08:13.040 ========================== 00:08:13.040 Submission Queue Entry Size 00:08:13.040 Max: 64 00:08:13.040 Min: 64 00:08:13.040 Completion Queue Entry Size 00:08:13.040 Max: 16 00:08:13.040 Min: 16 00:08:13.040 Number of Namespaces: 256 00:08:13.040 Compare Command: Supported 00:08:13.040 Write Uncorrectable Command: Not Supported 00:08:13.040 Dataset Management Command: Supported 00:08:13.040 Write Zeroes Command: Supported 00:08:13.040 Set Features Save Field: Supported 00:08:13.040 Reservations: Not Supported 00:08:13.040 Timestamp: Supported 00:08:13.040 Copy: Supported 00:08:13.040 Volatile Write Cache: Present 00:08:13.040 Atomic Write Unit (Normal): 1 00:08:13.040 Atomic Write Unit (PFail): 1 00:08:13.040 Atomic Compare & Write Unit: 1 00:08:13.040 Fused Compare & Write: Not Supported 00:08:13.040 Scatter-Gather List 00:08:13.040 SGL Command Set: Supported 00:08:13.040 SGL Keyed: Not Supported 00:08:13.040 SGL Bit Bucket Descriptor: Not Supported 00:08:13.040 SGL Metadata Pointer: Not Supported 00:08:13.040 Oversized SGL: Not Supported 00:08:13.040 SGL Metadata Address: Not Supported 00:08:13.040 SGL Offset: Not Supported 00:08:13.040 Transport SGL Data Block: Not Supported 00:08:13.040 Replay Protected Memory Block: Not Supported 00:08:13.040 00:08:13.040 Firmware Slot Information 00:08:13.040 ========================= 00:08:13.040 Active slot: 1 00:08:13.040 Slot 1 Firmware Revision: 1.0 00:08:13.040 00:08:13.040 00:08:13.040 Commands Supported and Effects 00:08:13.040 ============================== 00:08:13.040 Admin Commands 00:08:13.040 -------------- 00:08:13.040 Delete I/O Submission Queue (00h): Supported 00:08:13.040 Create I/O Submission Queue (01h): Supported 00:08:13.040 Get Log Page (02h): Supported 00:08:13.040 Delete I/O Completion Queue (04h): Supported 00:08:13.040 Create I/O Completion Queue (05h): Supported 00:08:13.040 Identify (06h): Supported 00:08:13.040 Abort (08h): Supported 00:08:13.041 Set Features (09h): Supported 00:08:13.041 Get Features (0Ah): Supported 00:08:13.041 Asynchronous Event Request (0Ch): Supported 00:08:13.041 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:13.041 Directive Send (19h): Supported 00:08:13.041 Directive Receive (1Ah): Supported 00:08:13.041 Virtualization Management (1Ch): Supported 00:08:13.041 Doorbell Buffer Config (7Ch): Supported 00:08:13.041 Format NVM (80h): Supported LBA-Change 00:08:13.041 I/O Commands 00:08:13.041 ------------ 00:08:13.041 Flush (00h): Supported LBA-Change 00:08:13.041 Write (01h): Supported LBA-Change 00:08:13.041 Read (02h): Supported 00:08:13.041 Compare (05h): Supported 00:08:13.041 Write Zeroes (08h): Supported LBA-Change 00:08:13.041 Dataset Management (09h): Supported LBA-Change 00:08:13.041 Unknown (0Ch): Supported 00:08:13.041 Unknown (12h): Supported 00:08:13.041 Copy (19h): Supported LBA-Change 00:08:13.041 Unknown (1Dh): Supported LBA-Change 00:08:13.041 00:08:13.041 Error Log 00:08:13.041 ========= 00:08:13.041 00:08:13.041 Arbitration 00:08:13.041 =========== 
00:08:13.041 Arbitration Burst: no limit 00:08:13.041 00:08:13.041 Power Management 00:08:13.041 ================ 00:08:13.041 Number of Power States: 1 00:08:13.041 Current Power State: Power State #0 00:08:13.041 Power State #0: 00:08:13.041 Max Power: 25.00 W 00:08:13.041 Non-Operational State: Operational 00:08:13.041 Entry Latency: 16 microseconds 00:08:13.041 Exit Latency: 4 microseconds 00:08:13.041 Relative Read Throughput: 0 00:08:13.041 Relative Read Latency: 0 00:08:13.041 Relative Write Throughput: 0 00:08:13.041 Relative Write Latency: 0 00:08:13.041 Idle Power: Not Reported 00:08:13.041 Active Power: Not Reported 00:08:13.041 Non-Operational Permissive Mode: Not Supported 00:08:13.041 00:08:13.041 Health Information 00:08:13.041 ================== 00:08:13.041 Critical Warnings: 00:08:13.041 Available Spare Space: OK 00:08:13.041 Temperature: OK 00:08:13.041 Device Reliability: OK 00:08:13.041 Read Only: No 00:08:13.041 Volatile Memory Backup: OK 00:08:13.041 Current Temperature: 323 Kelvin (50 Celsius) 00:08:13.041 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:13.041 Available Spare: 0% 00:08:13.041 Available Spare Threshold: 0% 00:08:13.041 Life Percentage Used: 0% 00:08:13.041 Data Units Read: 871 00:08:13.041 Data Units Written: 800 00:08:13.041 Host Read Commands: 36942 00:08:13.041 Host Write Commands: 36365 00:08:13.041 Controller Busy Time: 0 minutes 00:08:13.041 Power Cycles: 0 00:08:13.041 Power On Hours: 0 hours 00:08:13.041 Unsafe Shutdowns: 0 00:08:13.041 Unrecoverable Media Errors: 0 00:08:13.041 Lifetime Error Log Entries: 0 00:08:13.041 Warning Temperature Time: 0 minutes 00:08:13.041 Critical Temperature Time: 0 minutes 00:08:13.041 00:08:13.041 Number of Queues 00:08:13.041 ================ 00:08:13.041 Number of I/O Submission Queues: 64 00:08:13.041 Number of I/O Completion Queues: 64 00:08:13.041 00:08:13.041 ZNS Specific Controller Data 00:08:13.041 ============================ 00:08:13.041 Zone Append Size Limit: 0 00:08:13.041 00:08:13.041 00:08:13.041 Active Namespaces 00:08:13.041 ================= 00:08:13.041 Namespace ID:1 00:08:13.041 Error Recovery Timeout: Unlimited 00:08:13.041 Command Set Identifier: NVM (00h) 00:08:13.041 Deallocate: Supported 00:08:13.041 Deallocated/Unwritten Error: Supported 00:08:13.041 Deallocated Read Value: All 0x00 00:08:13.041 Deallocate in Write Zeroes: Not Supported 00:08:13.041 Deallocated Guard Field: 0xFFFF 00:08:13.041 Flush: Supported 00:08:13.041 Reservation: Not Supported 00:08:13.041 Namespace Sharing Capabilities: Multiple Controllers 00:08:13.041 Size (in LBAs): 262144 (1GiB) 00:08:13.041 Capacity (in LBAs): 262144 (1GiB) 00:08:13.041 Utilization (in LBAs): 262144 (1GiB) 00:08:13.041 Thin Provisioning: Not Supported 00:08:13.041 Per-NS Atomic Units: No 00:08:13.041 Maximum Single Source Range Length: 128 00:08:13.041 Maximum Copy Length: 128 00:08:13.041 Maximum Source Range Count: 128 00:08:13.041 NGUID/EUI64 Never Reused: No 00:08:13.041 Namespace Write Protected: No 00:08:13.041 Endurance group ID: 1 00:08:13.041 Number of LBA Formats: 8 00:08:13.041 Current LBA Format: LBA Format #04 00:08:13.041 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:13.041 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:13.041 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:13.041 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:13.041 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:13.041 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:13.041 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:08:13.041 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:13.041 00:08:13.041 Get Feature FDP: 00:08:13.041 ================ 00:08:13.041 Enabled: Yes 00:08:13.041 FDP configuration index: 0 00:08:13.041 00:08:13.041 FDP configurations log page 00:08:13.041 =========================== 00:08:13.041 Number of FDP configurations: 1 00:08:13.041 Version: 0 00:08:13.041 Size: 112 00:08:13.041 FDP Configuration Descriptor: 0 00:08:13.041 Descriptor Size: 96 00:08:13.041 Reclaim Group Identifier format: 2 00:08:13.041 FDP Volatile Write Cache: Not Present 00:08:13.041 FDP Configuration: Valid 00:08:13.041 Vendor Specific Size: 0 00:08:13.041 Number of Reclaim Groups: 2 00:08:13.041 Number of Reclaim Unit Handles: 8 00:08:13.041 Max Placement Identifiers: 128 00:08:13.041 Number of Namespaces Supported: 256 00:08:13.041 Reclaim unit Nominal Size: 6000000 bytes 00:08:13.041 Estimated Reclaim Unit Time Limit: Not Reported 00:08:13.041 RUH Desc #000: RUH Type: Initially Isolated 00:08:13.041 RUH Desc #001: RUH Type: Initially Isolated 00:08:13.041 RUH Desc #002: RUH Type: Initially Isolated 00:08:13.041 RUH Desc #003: RUH Type: Initially Isolated 00:08:13.041 RUH Desc #004: RUH Type: Initially Isolated 00:08:13.041 RUH Desc #005: RUH Type: Initially Isolated 00:08:13.041 RUH Desc #006: RUH Type: Initially Isolated 00:08:13.041 RUH Desc #007: RUH Type: Initially Isolated 00:08:13.041 00:08:13.041 FDP reclaim unit handle usage log page 00:08:13.041 ====================================== 00:08:13.041 Number of Reclaim Unit Handles: 8 00:08:13.041 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:13.041 RUH Usage Desc #001: RUH Attributes: Unused 00:08:13.041 RUH Usage Desc #002: RUH Attributes: Unused 00:08:13.041 RUH Usage Desc #003: RUH Attributes: Unused 00:08:13.041 RUH Usage Desc #004: RUH Attributes: Unused 00:08:13.041 RUH Usage Desc #005: RUH Attributes: Unused 00:08:13.041 RUH Usage Desc #006: RUH Attributes: Unused 00:08:13.041 RUH Usage Desc #007: RUH Attributes: Unused 00:08:13.041 00:08:13.041 FDP statistics log page 00:08:13.041 ======================= 00:08:13.041 Host bytes with metadata written: 516071424 00:08:13.041 Media bytes with metadata written: 516128768 00:08:13.041 Media bytes erased: 0 00:08:13.041 00:08:13.041 FDP events log page 00:08:13.041 =================== 00:08:13.041 Number of FDP events: 0 00:08:13.041 00:08:13.041 NVM Specific Namespace Data 00:08:13.041 =========================== 00:08:13.041 Logical Block Storage Tag Mask: 0 00:08:13.041 Protection Information Capabilities: 00:08:13.041 16b Guard Protection Information Storage Tag Support: No 00:08:13.041 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:13.041 Storage Tag Check Read Support: No 00:08:13.041 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:13.041 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:13.041 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:13.041 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:13.041 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:13.041 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:13.041 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:13.041 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:13.041 00:08:13.041 real 0m1.011s 00:08:13.041 user 0m0.402s 00:08:13.041 sys 0m0.416s 00:08:13.041 21:40:36 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.041 ************************************ 00:08:13.041 END TEST nvme_identify 00:08:13.041 21:40:36 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:13.041 ************************************ 00:08:13.041 21:40:36 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:13.041 21:40:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:13.041 21:40:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.041 21:40:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:13.041 ************************************ 00:08:13.041 START TEST nvme_perf 00:08:13.042 ************************************ 00:08:13.042 21:40:36 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:08:13.042 21:40:36 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:14.429 Initializing NVMe Controllers 00:08:14.429 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:14.429 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:14.429 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:14.429 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:14.429 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:14.429 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:14.429 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:14.429 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:14.429 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:14.429 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:14.429 Initialization complete. Launching workers. 
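The spdk_nvme_perf invocation above runs a 1-second, 100% read workload at queue depth 128 with 12288-byte I/Os (the -t, -w, -q, and -o options). The per-device throughput in the summary table below can be cross-checked from the IOPS column: IOPS multiplied by the 12288-byte I/O size, converted to MiB/s, should reproduce the MiB/s column. A minimal sketch of that check, using only awk and figures taken from the table below (it is not part of the test scripts themselves):

    awk 'BEGIN {
        iops = 8242.34      # per-device IOPS from the summary table below
        io_size = 12288     # bytes per I/O, from the -o option above
        # throughput = IOPS * bytes per I/O, converted to MiB/s (1 MiB = 1048576 bytes)
        printf "%.2f MiB/s\n", iops * io_size / 1048576
    }'
    # prints 96.59 MiB/s, matching the MiB/s column for each controller;
    # 6 devices * 96.59 MiB/s also gives the 579.54 MiB/s total reported below.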
00:08:14.429 ======================================================== 00:08:14.429 Latency(us) 00:08:14.429 Device Information : IOPS MiB/s Average min max 00:08:14.429 PCIE (0000:00:10.0) NSID 1 from core 0: 8242.34 96.59 15546.65 8607.80 32430.71 00:08:14.429 PCIE (0000:00:13.0) NSID 1 from core 0: 8242.34 96.59 15547.34 8289.89 32140.46 00:08:14.429 PCIE (0000:00:11.0) NSID 1 from core 0: 8242.34 96.59 15533.80 6404.93 32798.78 00:08:14.429 PCIE (0000:00:12.0) NSID 1 from core 0: 8242.34 96.59 15519.34 6107.08 32193.85 00:08:14.429 PCIE (0000:00:12.0) NSID 2 from core 0: 8242.34 96.59 15505.62 5372.28 32353.66 00:08:14.429 PCIE (0000:00:12.0) NSID 3 from core 0: 8242.34 96.59 15491.84 4768.51 32401.47 00:08:14.429 ======================================================== 00:08:14.429 Total : 49454.05 579.54 15524.10 4768.51 32798.78 00:08:14.429 00:08:14.429 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:14.429 ================================================================================= 00:08:14.429 1.00000% : 10687.409us 00:08:14.429 10.00000% : 13510.498us 00:08:14.429 25.00000% : 14417.920us 00:08:14.429 50.00000% : 15526.991us 00:08:14.429 75.00000% : 16535.237us 00:08:14.429 90.00000% : 17543.483us 00:08:14.429 95.00000% : 18249.255us 00:08:14.429 98.00000% : 19862.449us 00:08:14.429 99.00000% : 21072.345us 00:08:14.429 99.50000% : 31860.578us 00:08:14.429 99.90000% : 32465.526us 00:08:14.429 99.99000% : 32465.526us 00:08:14.429 99.99900% : 32465.526us 00:08:14.429 99.99990% : 32465.526us 00:08:14.429 99.99999% : 32465.526us 00:08:14.429 00:08:14.429 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:14.430 ================================================================================= 00:08:14.430 1.00000% : 11090.708us 00:08:14.430 10.00000% : 13510.498us 00:08:14.430 25.00000% : 14417.920us 00:08:14.430 50.00000% : 15526.991us 00:08:14.430 75.00000% : 16535.237us 00:08:14.430 90.00000% : 17341.834us 00:08:14.430 95.00000% : 18652.554us 00:08:14.430 98.00000% : 20366.572us 00:08:14.430 99.00000% : 21979.766us 00:08:14.430 99.50000% : 31457.280us 00:08:14.430 99.90000% : 32062.228us 00:08:14.430 99.99000% : 32263.877us 00:08:14.430 99.99900% : 32263.877us 00:08:14.430 99.99990% : 32263.877us 00:08:14.430 99.99999% : 32263.877us 00:08:14.430 00:08:14.430 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:14.430 ================================================================================= 00:08:14.430 1.00000% : 10989.883us 00:08:14.430 10.00000% : 13409.674us 00:08:14.430 25.00000% : 14417.920us 00:08:14.430 50.00000% : 15526.991us 00:08:14.430 75.00000% : 16434.412us 00:08:14.430 90.00000% : 17442.658us 00:08:14.430 95.00000% : 18955.028us 00:08:14.430 98.00000% : 19963.274us 00:08:14.430 99.00000% : 23693.785us 00:08:14.430 99.50000% : 32263.877us 00:08:14.430 99.90000% : 32868.825us 00:08:14.430 99.99000% : 32868.825us 00:08:14.430 99.99900% : 32868.825us 00:08:14.430 99.99990% : 32868.825us 00:08:14.430 99.99999% : 32868.825us 00:08:14.430 00:08:14.430 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:14.430 ================================================================================= 00:08:14.430 1.00000% : 11141.120us 00:08:14.430 10.00000% : 13409.674us 00:08:14.430 25.00000% : 14417.920us 00:08:14.430 50.00000% : 15526.991us 00:08:14.430 75.00000% : 16434.412us 00:08:14.430 90.00000% : 17543.483us 00:08:14.430 95.00000% : 18854.203us 00:08:14.430 98.00000% : 19761.625us 
00:08:14.430 99.00000% : 24097.083us 00:08:14.430 99.50000% : 31658.929us 00:08:14.430 99.90000% : 32062.228us 00:08:14.430 99.99000% : 32263.877us 00:08:14.430 99.99900% : 32263.877us 00:08:14.430 99.99990% : 32263.877us 00:08:14.430 99.99999% : 32263.877us 00:08:14.430 00:08:14.430 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:14.430 ================================================================================= 00:08:14.430 1.00000% : 10636.997us 00:08:14.430 10.00000% : 13510.498us 00:08:14.430 25.00000% : 14417.920us 00:08:14.430 50.00000% : 15526.991us 00:08:14.430 75.00000% : 16434.412us 00:08:14.430 90.00000% : 17543.483us 00:08:14.430 95.00000% : 18551.729us 00:08:14.430 98.00000% : 19459.151us 00:08:14.430 99.00000% : 24500.382us 00:08:14.430 99.50000% : 31658.929us 00:08:14.430 99.90000% : 32263.877us 00:08:14.430 99.99000% : 32465.526us 00:08:14.430 99.99900% : 32465.526us 00:08:14.430 99.99990% : 32465.526us 00:08:14.430 99.99999% : 32465.526us 00:08:14.430 00:08:14.430 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:14.430 ================================================================================= 00:08:14.430 1.00000% : 9779.988us 00:08:14.430 10.00000% : 13510.498us 00:08:14.430 25.00000% : 14317.095us 00:08:14.430 50.00000% : 15526.991us 00:08:14.430 75.00000% : 16535.237us 00:08:14.430 90.00000% : 17543.483us 00:08:14.430 95.00000% : 18249.255us 00:08:14.430 98.00000% : 19559.975us 00:08:14.430 99.00000% : 24702.031us 00:08:14.430 99.50000% : 31860.578us 00:08:14.430 99.90000% : 32465.526us 00:08:14.430 99.99000% : 32465.526us 00:08:14.430 99.99900% : 32465.526us 00:08:14.430 99.99990% : 32465.526us 00:08:14.430 99.99999% : 32465.526us 00:08:14.430 00:08:14.430 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:14.430 ============================================================================== 00:08:14.430 Range in us Cumulative IO count 00:08:14.430 8570.092 - 8620.505: 0.0242% ( 2) 00:08:14.430 8620.505 - 8670.917: 0.0606% ( 3) 00:08:14.430 8670.917 - 8721.329: 0.0969% ( 3) 00:08:14.430 8721.329 - 8771.742: 0.1453% ( 4) 00:08:14.430 8771.742 - 8822.154: 0.1938% ( 4) 00:08:14.430 8822.154 - 8872.566: 0.2301% ( 3) 00:08:14.430 8872.566 - 8922.978: 0.2665% ( 3) 00:08:14.430 8922.978 - 8973.391: 0.3028% ( 3) 00:08:14.430 8973.391 - 9023.803: 0.3149% ( 1) 00:08:14.430 9023.803 - 9074.215: 0.4239% ( 9) 00:08:14.430 9074.215 - 9124.628: 0.4603% ( 3) 00:08:14.430 9124.628 - 9175.040: 0.4966% ( 3) 00:08:14.430 9175.040 - 9225.452: 0.5329% ( 3) 00:08:14.430 9225.452 - 9275.865: 0.5572% ( 2) 00:08:14.430 9275.865 - 9326.277: 0.6056% ( 4) 00:08:14.430 9326.277 - 9376.689: 0.6541% ( 4) 00:08:14.430 9376.689 - 9427.102: 0.7025% ( 4) 00:08:14.430 9427.102 - 9477.514: 0.7267% ( 2) 00:08:14.430 9477.514 - 9527.926: 0.7631% ( 3) 00:08:14.430 9527.926 - 9578.338: 0.7752% ( 1) 00:08:14.430 10334.523 - 10384.935: 0.7873% ( 1) 00:08:14.430 10384.935 - 10435.348: 0.8358% ( 4) 00:08:14.430 10435.348 - 10485.760: 0.8842% ( 4) 00:08:14.430 10485.760 - 10536.172: 0.9448% ( 5) 00:08:14.430 10536.172 - 10586.585: 0.9569% ( 1) 00:08:14.430 10586.585 - 10636.997: 0.9811% ( 2) 00:08:14.430 10636.997 - 10687.409: 1.0296% ( 4) 00:08:14.430 10687.409 - 10737.822: 1.0538% ( 2) 00:08:14.430 10737.822 - 10788.234: 1.0780% ( 2) 00:08:14.430 10788.234 - 10838.646: 1.1265% ( 4) 00:08:14.430 10838.646 - 10889.058: 1.1507% ( 2) 00:08:14.430 10889.058 - 10939.471: 1.1870% ( 3) 00:08:14.430 10939.471 - 10989.883: 1.2234% ( 3) 00:08:14.430 
10989.883 - 11040.295: 1.2476% ( 2) 00:08:14.430 11040.295 - 11090.708: 1.2960% ( 4) 00:08:14.430 11090.708 - 11141.120: 1.3203% ( 2) 00:08:14.430 11141.120 - 11191.532: 1.3566% ( 3) 00:08:14.430 11191.532 - 11241.945: 1.3808% ( 2) 00:08:14.430 11342.769 - 11393.182: 1.5383% ( 13) 00:08:14.430 11393.182 - 11443.594: 1.5504% ( 1) 00:08:14.430 11746.068 - 11796.480: 1.5988% ( 4) 00:08:14.430 11796.480 - 11846.892: 1.6231% ( 2) 00:08:14.430 11846.892 - 11897.305: 1.6594% ( 3) 00:08:14.430 11897.305 - 11947.717: 1.6836% ( 2) 00:08:14.430 11947.717 - 11998.129: 1.7200% ( 3) 00:08:14.430 11998.129 - 12048.542: 1.7321% ( 1) 00:08:14.430 12048.542 - 12098.954: 1.8169% ( 7) 00:08:14.430 12098.954 - 12149.366: 1.9622% ( 12) 00:08:14.430 12149.366 - 12199.778: 2.0591% ( 8) 00:08:14.430 12199.778 - 12250.191: 2.1439% ( 7) 00:08:14.430 12250.191 - 12300.603: 2.2529% ( 9) 00:08:14.430 12300.603 - 12351.015: 2.3861% ( 11) 00:08:14.430 12351.015 - 12401.428: 2.5194% ( 11) 00:08:14.430 12401.428 - 12451.840: 2.6647% ( 12) 00:08:14.430 12451.840 - 12502.252: 2.8464% ( 15) 00:08:14.430 12502.252 - 12552.665: 3.0281% ( 15) 00:08:14.430 12552.665 - 12603.077: 3.3309% ( 25) 00:08:14.430 12603.077 - 12653.489: 3.6216% ( 24) 00:08:14.430 12653.489 - 12703.902: 3.8033% ( 15) 00:08:14.430 12703.902 - 12754.314: 4.1667% ( 30) 00:08:14.430 12754.314 - 12804.726: 4.5058% ( 28) 00:08:14.430 12804.726 - 12855.138: 4.8813% ( 31) 00:08:14.430 12855.138 - 12905.551: 5.2326% ( 29) 00:08:14.430 12905.551 - 13006.375: 5.9714% ( 61) 00:08:14.430 13006.375 - 13107.200: 6.8072% ( 69) 00:08:14.430 13107.200 - 13208.025: 7.5097% ( 58) 00:08:14.430 13208.025 - 13308.849: 8.2728% ( 63) 00:08:14.430 13308.849 - 13409.674: 9.3387% ( 88) 00:08:14.430 13409.674 - 13510.498: 10.5620% ( 101) 00:08:14.430 13510.498 - 13611.323: 12.1609% ( 132) 00:08:14.430 13611.323 - 13712.148: 13.7355% ( 130) 00:08:14.430 13712.148 - 13812.972: 15.1647% ( 118) 00:08:14.430 13812.972 - 13913.797: 16.4608% ( 107) 00:08:14.430 13913.797 - 14014.622: 18.3382% ( 155) 00:08:14.430 14014.622 - 14115.446: 19.9128% ( 130) 00:08:14.430 14115.446 - 14216.271: 21.7902% ( 155) 00:08:14.430 14216.271 - 14317.095: 24.0431% ( 186) 00:08:14.430 14317.095 - 14417.920: 26.6473% ( 215) 00:08:14.430 14417.920 - 14518.745: 28.4157% ( 146) 00:08:14.430 14518.745 - 14619.569: 30.5838% ( 179) 00:08:14.430 14619.569 - 14720.394: 32.8246% ( 185) 00:08:14.430 14720.394 - 14821.218: 34.7626% ( 160) 00:08:14.430 14821.218 - 14922.043: 37.0155% ( 186) 00:08:14.430 14922.043 - 15022.868: 38.9656% ( 161) 00:08:14.430 15022.868 - 15123.692: 41.6788% ( 224) 00:08:14.430 15123.692 - 15224.517: 43.8953% ( 183) 00:08:14.430 15224.517 - 15325.342: 45.8333% ( 160) 00:08:14.430 15325.342 - 15426.166: 49.1158% ( 271) 00:08:14.430 15426.166 - 15526.991: 51.6109% ( 206) 00:08:14.430 15526.991 - 15627.815: 54.5300% ( 241) 00:08:14.430 15627.815 - 15728.640: 56.7466% ( 183) 00:08:14.430 15728.640 - 15829.465: 59.2297% ( 205) 00:08:14.430 15829.465 - 15930.289: 62.3183% ( 255) 00:08:14.430 15930.289 - 16031.114: 65.0194% ( 223) 00:08:14.430 16031.114 - 16131.938: 66.9937% ( 163) 00:08:14.430 16131.938 - 16232.763: 69.7796% ( 230) 00:08:14.430 16232.763 - 16333.588: 71.8266% ( 169) 00:08:14.430 16333.588 - 16434.412: 74.3823% ( 211) 00:08:14.430 16434.412 - 16535.237: 76.4656% ( 172) 00:08:14.430 16535.237 - 16636.062: 78.4399% ( 163) 00:08:14.430 16636.062 - 16736.886: 80.3052% ( 154) 00:08:14.430 16736.886 - 16837.711: 82.1948% ( 156) 00:08:14.430 16837.711 - 16938.535: 83.8421% ( 136) 00:08:14.430 
16938.535 - 17039.360: 85.0412% ( 99) 00:08:14.430 17039.360 - 17140.185: 86.6642% ( 134) 00:08:14.430 17140.185 - 17241.009: 87.9482% ( 106) 00:08:14.430 17241.009 - 17341.834: 89.0262% ( 89) 00:08:14.430 17341.834 - 17442.658: 89.7287% ( 58) 00:08:14.430 17442.658 - 17543.483: 90.7825% ( 87) 00:08:14.430 17543.483 - 17644.308: 91.8968% ( 92) 00:08:14.430 17644.308 - 17745.132: 92.3450% ( 37) 00:08:14.430 17745.132 - 17845.957: 92.8900% ( 45) 00:08:14.430 17845.957 - 17946.782: 93.5804% ( 57) 00:08:14.430 17946.782 - 18047.606: 94.2103% ( 52) 00:08:14.431 18047.606 - 18148.431: 94.6705% ( 38) 00:08:14.431 18148.431 - 18249.255: 95.0339% ( 30) 00:08:14.431 18249.255 - 18350.080: 95.3731% ( 28) 00:08:14.431 18350.080 - 18450.905: 95.7001% ( 27) 00:08:14.431 18450.905 - 18551.729: 95.9787% ( 23) 00:08:14.431 18551.729 - 18652.554: 96.1240% ( 12) 00:08:14.431 18652.554 - 18753.378: 96.2815% ( 13) 00:08:14.431 18753.378 - 18854.203: 96.4632% ( 15) 00:08:14.431 18854.203 - 18955.028: 96.5843% ( 10) 00:08:14.431 18955.028 - 19055.852: 96.8629% ( 23) 00:08:14.431 19055.852 - 19156.677: 97.0688% ( 17) 00:08:14.431 19156.677 - 19257.502: 97.2263% ( 13) 00:08:14.431 19257.502 - 19358.326: 97.4443% ( 18) 00:08:14.431 19358.326 - 19459.151: 97.5048% ( 5) 00:08:14.431 19459.151 - 19559.975: 97.5654% ( 5) 00:08:14.431 19559.975 - 19660.800: 97.7350% ( 14) 00:08:14.431 19660.800 - 19761.625: 97.9530% ( 18) 00:08:14.431 19761.625 - 19862.449: 98.1105% ( 13) 00:08:14.431 19862.449 - 19963.274: 98.2316% ( 10) 00:08:14.431 19963.274 - 20064.098: 98.3406% ( 9) 00:08:14.431 20064.098 - 20164.923: 98.4617% ( 10) 00:08:14.431 20164.923 - 20265.748: 98.5828% ( 10) 00:08:14.431 20265.748 - 20366.572: 98.6797% ( 8) 00:08:14.431 20366.572 - 20467.397: 98.7766% ( 8) 00:08:14.431 20467.397 - 20568.222: 98.7888% ( 1) 00:08:14.431 20568.222 - 20669.046: 98.8493% ( 5) 00:08:14.431 20669.046 - 20769.871: 98.8857% ( 3) 00:08:14.431 20769.871 - 20870.695: 98.9099% ( 2) 00:08:14.431 20870.695 - 20971.520: 98.9704% ( 5) 00:08:14.431 20971.520 - 21072.345: 99.0431% ( 6) 00:08:14.431 21173.169 - 21273.994: 99.1158% ( 6) 00:08:14.431 21273.994 - 21374.818: 99.1279% ( 1) 00:08:14.431 21374.818 - 21475.643: 99.2248% ( 8) 00:08:14.431 30852.332 - 31053.982: 99.2611% ( 3) 00:08:14.431 31053.982 - 31255.631: 99.3096% ( 4) 00:08:14.431 31255.631 - 31457.280: 99.3944% ( 7) 00:08:14.431 31457.280 - 31658.929: 99.4913% ( 8) 00:08:14.431 31658.929 - 31860.578: 99.6366% ( 12) 00:08:14.431 31860.578 - 32062.228: 99.7456% ( 9) 00:08:14.431 32062.228 - 32263.877: 99.8910% ( 12) 00:08:14.431 32263.877 - 32465.526: 100.0000% ( 9) 00:08:14.431 00:08:14.431 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:14.431 ============================================================================== 00:08:14.431 Range in us Cumulative IO count 00:08:14.431 8267.618 - 8318.031: 0.0363% ( 3) 00:08:14.431 8318.031 - 8368.443: 0.0727% ( 3) 00:08:14.431 8368.443 - 8418.855: 0.1332% ( 5) 00:08:14.431 8418.855 - 8469.268: 0.1817% ( 4) 00:08:14.431 8469.268 - 8519.680: 0.2301% ( 4) 00:08:14.431 8519.680 - 8570.092: 0.2786% ( 4) 00:08:14.431 8570.092 - 8620.505: 0.3270% ( 4) 00:08:14.431 8620.505 - 8670.917: 0.3876% ( 5) 00:08:14.431 8670.917 - 8721.329: 0.4360% ( 4) 00:08:14.431 8721.329 - 8771.742: 0.4845% ( 4) 00:08:14.431 8771.742 - 8822.154: 0.5329% ( 4) 00:08:14.431 8822.154 - 8872.566: 0.5935% ( 5) 00:08:14.431 8872.566 - 8922.978: 0.6298% ( 3) 00:08:14.431 8922.978 - 8973.391: 0.6783% ( 4) 00:08:14.431 8973.391 - 9023.803: 0.7267% ( 4) 
00:08:14.431 9023.803 - 9074.215: 0.7752% ( 4) 00:08:14.431 10788.234 - 10838.646: 0.8115% ( 3) 00:08:14.431 10838.646 - 10889.058: 0.8479% ( 3) 00:08:14.431 10889.058 - 10939.471: 0.9084% ( 5) 00:08:14.431 10939.471 - 10989.883: 0.9327% ( 2) 00:08:14.431 10989.883 - 11040.295: 0.9690% ( 3) 00:08:14.431 11040.295 - 11090.708: 1.0053% ( 3) 00:08:14.431 11090.708 - 11141.120: 1.0417% ( 3) 00:08:14.431 11141.120 - 11191.532: 1.0780% ( 3) 00:08:14.431 11191.532 - 11241.945: 1.1143% ( 3) 00:08:14.431 11241.945 - 11292.357: 1.1507% ( 3) 00:08:14.431 11292.357 - 11342.769: 1.1870% ( 3) 00:08:14.431 11342.769 - 11393.182: 1.2234% ( 3) 00:08:14.431 11393.182 - 11443.594: 1.2597% ( 3) 00:08:14.431 11443.594 - 11494.006: 1.2960% ( 3) 00:08:14.431 11494.006 - 11544.418: 1.3445% ( 4) 00:08:14.431 11544.418 - 11594.831: 1.3808% ( 3) 00:08:14.431 11594.831 - 11645.243: 1.4172% ( 3) 00:08:14.431 11645.243 - 11695.655: 1.4656% ( 4) 00:08:14.431 11695.655 - 11746.068: 1.5019% ( 3) 00:08:14.431 11746.068 - 11796.480: 1.5383% ( 3) 00:08:14.431 11796.480 - 11846.892: 1.5504% ( 1) 00:08:14.431 12048.542 - 12098.954: 1.6109% ( 5) 00:08:14.431 12098.954 - 12149.366: 1.7321% ( 10) 00:08:14.431 12149.366 - 12199.778: 1.8290% ( 8) 00:08:14.431 12199.778 - 12250.191: 1.9380% ( 9) 00:08:14.431 12250.191 - 12300.603: 2.0591% ( 10) 00:08:14.431 12300.603 - 12351.015: 2.1560% ( 8) 00:08:14.431 12351.015 - 12401.428: 2.2650% ( 9) 00:08:14.431 12401.428 - 12451.840: 2.3740% ( 9) 00:08:14.431 12451.840 - 12502.252: 2.5557% ( 15) 00:08:14.431 12502.252 - 12552.665: 2.7374% ( 15) 00:08:14.431 12552.665 - 12603.077: 3.0160% ( 23) 00:08:14.431 12603.077 - 12653.489: 3.2582% ( 20) 00:08:14.431 12653.489 - 12703.902: 3.5610% ( 25) 00:08:14.431 12703.902 - 12754.314: 3.8639% ( 25) 00:08:14.431 12754.314 - 12804.726: 4.2272% ( 30) 00:08:14.431 12804.726 - 12855.138: 4.7602% ( 44) 00:08:14.431 12855.138 - 12905.551: 5.3416% ( 48) 00:08:14.431 12905.551 - 13006.375: 6.1168% ( 64) 00:08:14.431 13006.375 - 13107.200: 6.9525% ( 69) 00:08:14.431 13107.200 - 13208.025: 7.8125% ( 71) 00:08:14.431 13208.025 - 13308.849: 8.7452% ( 77) 00:08:14.431 13308.849 - 13409.674: 9.8595% ( 92) 00:08:14.431 13409.674 - 13510.498: 10.9738% ( 92) 00:08:14.431 13510.498 - 13611.323: 12.2335% ( 104) 00:08:14.431 13611.323 - 13712.148: 13.6749% ( 119) 00:08:14.431 13712.148 - 13812.972: 15.1405% ( 121) 00:08:14.431 13812.972 - 13913.797: 16.7636% ( 134) 00:08:14.431 13913.797 - 14014.622: 18.5562% ( 148) 00:08:14.431 14014.622 - 14115.446: 20.3488% ( 148) 00:08:14.431 14115.446 - 14216.271: 22.2141% ( 154) 00:08:14.431 14216.271 - 14317.095: 24.1400% ( 159) 00:08:14.431 14317.095 - 14417.920: 25.9327% ( 148) 00:08:14.431 14417.920 - 14518.745: 27.8828% ( 161) 00:08:14.431 14518.745 - 14619.569: 30.0145% ( 176) 00:08:14.431 14619.569 - 14720.394: 32.2432% ( 184) 00:08:14.431 14720.394 - 14821.218: 34.7263% ( 205) 00:08:14.431 14821.218 - 14922.043: 37.1609% ( 201) 00:08:14.431 14922.043 - 15022.868: 39.5470% ( 197) 00:08:14.431 15022.868 - 15123.692: 41.7272% ( 180) 00:08:14.431 15123.692 - 15224.517: 43.9438% ( 183) 00:08:14.431 15224.517 - 15325.342: 46.5237% ( 213) 00:08:14.431 15325.342 - 15426.166: 49.1158% ( 214) 00:08:14.431 15426.166 - 15526.991: 51.6836% ( 212) 00:08:14.431 15526.991 - 15627.815: 54.3362% ( 219) 00:08:14.431 15627.815 - 15728.640: 57.0736% ( 226) 00:08:14.431 15728.640 - 15829.465: 59.5688% ( 206) 00:08:14.431 15829.465 - 15930.289: 62.2214% ( 219) 00:08:14.431 15930.289 - 16031.114: 64.9588% ( 226) 00:08:14.431 16031.114 - 
16131.938: 67.6235% ( 220) 00:08:14.431 16131.938 - 16232.763: 70.2277% ( 215) 00:08:14.431 16232.763 - 16333.588: 72.6260% ( 198) 00:08:14.431 16333.588 - 16434.412: 74.9394% ( 191) 00:08:14.431 16434.412 - 16535.237: 77.1318% ( 181) 00:08:14.431 16535.237 - 16636.062: 79.3605% ( 184) 00:08:14.431 16636.062 - 16736.886: 81.6860% ( 192) 00:08:14.431 16736.886 - 16837.711: 83.6967% ( 166) 00:08:14.431 16837.711 - 16938.535: 85.5136% ( 150) 00:08:14.431 16938.535 - 17039.360: 87.1366% ( 134) 00:08:14.431 17039.360 - 17140.185: 88.5538% ( 117) 00:08:14.431 17140.185 - 17241.009: 89.7287% ( 97) 00:08:14.431 17241.009 - 17341.834: 90.9399% ( 100) 00:08:14.431 17341.834 - 17442.658: 91.7636% ( 68) 00:08:14.431 17442.658 - 17543.483: 92.3450% ( 48) 00:08:14.431 17543.483 - 17644.308: 92.7689% ( 35) 00:08:14.431 17644.308 - 17745.132: 93.0959% ( 27) 00:08:14.431 17745.132 - 17845.957: 93.3987% ( 25) 00:08:14.431 17845.957 - 17946.782: 93.6773% ( 23) 00:08:14.431 17946.782 - 18047.606: 93.8711% ( 16) 00:08:14.431 18047.606 - 18148.431: 94.1134% ( 20) 00:08:14.431 18148.431 - 18249.255: 94.3798% ( 22) 00:08:14.431 18249.255 - 18350.080: 94.5615% ( 15) 00:08:14.431 18350.080 - 18450.905: 94.7069% ( 12) 00:08:14.431 18450.905 - 18551.729: 94.8643% ( 13) 00:08:14.431 18551.729 - 18652.554: 95.1066% ( 20) 00:08:14.431 18652.554 - 18753.378: 95.3731% ( 22) 00:08:14.431 18753.378 - 18854.203: 95.6032% ( 19) 00:08:14.431 18854.203 - 18955.028: 95.7849% ( 15) 00:08:14.431 18955.028 - 19055.852: 96.0756% ( 24) 00:08:14.431 19055.852 - 19156.677: 96.1967% ( 10) 00:08:14.431 19156.677 - 19257.502: 96.3663% ( 14) 00:08:14.431 19257.502 - 19358.326: 96.5843% ( 18) 00:08:14.431 19358.326 - 19459.151: 96.7660% ( 15) 00:08:14.431 19459.151 - 19559.975: 96.9719% ( 17) 00:08:14.431 19559.975 - 19660.800: 97.1536% ( 15) 00:08:14.431 19660.800 - 19761.625: 97.3232% ( 14) 00:08:14.431 19761.625 - 19862.449: 97.4443% ( 10) 00:08:14.431 19862.449 - 19963.274: 97.6139% ( 14) 00:08:14.431 19963.274 - 20064.098: 97.7471% ( 11) 00:08:14.431 20064.098 - 20164.923: 97.8682% ( 10) 00:08:14.431 20164.923 - 20265.748: 97.9651% ( 8) 00:08:14.431 20265.748 - 20366.572: 98.0378% ( 6) 00:08:14.431 20366.572 - 20467.397: 98.1105% ( 6) 00:08:14.431 20467.397 - 20568.222: 98.1831% ( 6) 00:08:14.431 20568.222 - 20669.046: 98.2679% ( 7) 00:08:14.431 20669.046 - 20769.871: 98.3648% ( 8) 00:08:14.431 20769.871 - 20870.695: 98.5102% ( 12) 00:08:14.431 20870.695 - 20971.520: 98.5707% ( 5) 00:08:14.431 20971.520 - 21072.345: 98.6313% ( 5) 00:08:14.431 21072.345 - 21173.169: 98.6797% ( 4) 00:08:14.431 21173.169 - 21273.994: 98.7282% ( 4) 00:08:14.431 21273.994 - 21374.818: 98.7766% ( 4) 00:08:14.431 21374.818 - 21475.643: 98.8251% ( 4) 00:08:14.431 21475.643 - 21576.468: 98.8493% ( 2) 00:08:14.431 21576.468 - 21677.292: 98.8857% ( 3) 00:08:14.431 21677.292 - 21778.117: 98.9220% ( 3) 00:08:14.431 21778.117 - 21878.942: 98.9704% ( 4) 00:08:14.432 21878.942 - 21979.766: 99.0310% ( 5) 00:08:14.432 21979.766 - 22080.591: 99.0795% ( 4) 00:08:14.432 22080.591 - 22181.415: 99.1279% ( 4) 00:08:14.432 22181.415 - 22282.240: 99.1885% ( 5) 00:08:14.432 22282.240 - 22383.065: 99.2248% ( 3) 00:08:14.432 31053.982 - 31255.631: 99.3702% ( 12) 00:08:14.432 31255.631 - 31457.280: 99.5034% ( 11) 00:08:14.432 31457.280 - 31658.929: 99.6487% ( 12) 00:08:14.432 31658.929 - 31860.578: 99.7820% ( 11) 00:08:14.432 31860.578 - 32062.228: 99.9394% ( 13) 00:08:14.432 32062.228 - 32263.877: 100.0000% ( 5) 00:08:14.432 00:08:14.432 Latency histogram for PCIE (0000:00:11.0) 
NSID 1 from core 0: 00:08:14.432 ============================================================================== 00:08:14.432 Range in us Cumulative IO count 00:08:14.432 6402.363 - 6427.569: 0.0242% ( 2) 00:08:14.432 6427.569 - 6452.775: 0.0606% ( 3) 00:08:14.432 6452.775 - 6503.188: 0.0969% ( 3) 00:08:14.432 6503.188 - 6553.600: 0.1453% ( 4) 00:08:14.432 6553.600 - 6604.012: 0.2059% ( 5) 00:08:14.432 6604.012 - 6654.425: 0.2544% ( 4) 00:08:14.432 6654.425 - 6704.837: 0.3028% ( 4) 00:08:14.432 6704.837 - 6755.249: 0.3513% ( 4) 00:08:14.432 6755.249 - 6805.662: 0.3997% ( 4) 00:08:14.432 6805.662 - 6856.074: 0.4360% ( 3) 00:08:14.432 6856.074 - 6906.486: 0.4603% ( 2) 00:08:14.432 6906.486 - 6956.898: 0.4845% ( 2) 00:08:14.432 6956.898 - 7007.311: 0.5087% ( 2) 00:08:14.432 7007.311 - 7057.723: 0.5451% ( 3) 00:08:14.432 7057.723 - 7108.135: 0.5693% ( 2) 00:08:14.432 7108.135 - 7158.548: 0.5935% ( 2) 00:08:14.432 7158.548 - 7208.960: 0.6177% ( 2) 00:08:14.432 7208.960 - 7259.372: 0.6420% ( 2) 00:08:14.432 7259.372 - 7309.785: 0.6783% ( 3) 00:08:14.432 7309.785 - 7360.197: 0.7025% ( 2) 00:08:14.432 7360.197 - 7410.609: 0.7267% ( 2) 00:08:14.432 7410.609 - 7461.022: 0.7631% ( 3) 00:08:14.432 7461.022 - 7511.434: 0.7752% ( 1) 00:08:14.432 10636.997 - 10687.409: 0.7873% ( 1) 00:08:14.432 10687.409 - 10737.822: 0.8479% ( 5) 00:08:14.432 10737.822 - 10788.234: 0.8842% ( 3) 00:08:14.432 10788.234 - 10838.646: 0.9205% ( 3) 00:08:14.432 10838.646 - 10889.058: 0.9569% ( 3) 00:08:14.432 10889.058 - 10939.471: 0.9811% ( 2) 00:08:14.432 10939.471 - 10989.883: 1.0174% ( 3) 00:08:14.432 10989.883 - 11040.295: 1.0538% ( 3) 00:08:14.432 11040.295 - 11090.708: 1.1022% ( 4) 00:08:14.432 11090.708 - 11141.120: 1.1386% ( 3) 00:08:14.432 11141.120 - 11191.532: 1.1749% ( 3) 00:08:14.432 11191.532 - 11241.945: 1.2112% ( 3) 00:08:14.432 11241.945 - 11292.357: 1.2476% ( 3) 00:08:14.432 11292.357 - 11342.769: 1.2960% ( 4) 00:08:14.432 11342.769 - 11393.182: 1.3324% ( 3) 00:08:14.432 11393.182 - 11443.594: 1.3808% ( 4) 00:08:14.432 11443.594 - 11494.006: 1.4293% ( 4) 00:08:14.432 11494.006 - 11544.418: 1.5019% ( 6) 00:08:14.432 11544.418 - 11594.831: 1.6109% ( 9) 00:08:14.432 11594.831 - 11645.243: 1.6715% ( 5) 00:08:14.432 11645.243 - 11695.655: 1.7805% ( 9) 00:08:14.432 11695.655 - 11746.068: 1.8411% ( 5) 00:08:14.432 11746.068 - 11796.480: 1.8895% ( 4) 00:08:14.432 11796.480 - 11846.892: 1.9501% ( 5) 00:08:14.432 11846.892 - 11897.305: 2.0228% ( 6) 00:08:14.432 11897.305 - 11947.717: 2.0833% ( 5) 00:08:14.432 11947.717 - 11998.129: 2.1560% ( 6) 00:08:14.432 11998.129 - 12048.542: 2.2166% ( 5) 00:08:14.432 12048.542 - 12098.954: 2.2892% ( 6) 00:08:14.432 12098.954 - 12149.366: 2.3498% ( 5) 00:08:14.432 12149.366 - 12199.778: 2.4104% ( 5) 00:08:14.432 12199.778 - 12250.191: 2.4709% ( 5) 00:08:14.432 12250.191 - 12300.603: 2.5557% ( 7) 00:08:14.432 12300.603 - 12351.015: 2.6890% ( 11) 00:08:14.432 12351.015 - 12401.428: 2.8585% ( 14) 00:08:14.432 12401.428 - 12451.840: 3.0039% ( 12) 00:08:14.432 12451.840 - 12502.252: 3.1977% ( 16) 00:08:14.432 12502.252 - 12552.665: 3.4157% ( 18) 00:08:14.432 12552.665 - 12603.077: 3.6943% ( 23) 00:08:14.432 12603.077 - 12653.489: 4.0455% ( 29) 00:08:14.432 12653.489 - 12703.902: 4.3484% ( 25) 00:08:14.432 12703.902 - 12754.314: 4.6148% ( 22) 00:08:14.432 12754.314 - 12804.726: 4.9055% ( 24) 00:08:14.432 12804.726 - 12855.138: 5.2810% ( 31) 00:08:14.432 12855.138 - 12905.551: 5.6080% ( 27) 00:08:14.432 12905.551 - 13006.375: 6.3469% ( 61) 00:08:14.432 13006.375 - 13107.200: 7.2674% ( 76) 
00:08:14.432 13107.200 - 13208.025: 8.2001% ( 77) 00:08:14.432 13208.025 - 13308.849: 9.0359% ( 69) 00:08:14.432 13308.849 - 13409.674: 10.1502% ( 92) 00:08:14.432 13409.674 - 13510.498: 11.2524% ( 91) 00:08:14.432 13510.498 - 13611.323: 12.7180% ( 121) 00:08:14.432 13611.323 - 13712.148: 14.2684% ( 128) 00:08:14.432 13712.148 - 13812.972: 15.9520% ( 139) 00:08:14.432 13812.972 - 13913.797: 17.5630% ( 133) 00:08:14.432 13913.797 - 14014.622: 19.2224% ( 137) 00:08:14.432 14014.622 - 14115.446: 21.0271% ( 149) 00:08:14.432 14115.446 - 14216.271: 22.8682% ( 152) 00:08:14.432 14216.271 - 14317.095: 24.6851% ( 150) 00:08:14.432 14317.095 - 14417.920: 26.4414% ( 145) 00:08:14.432 14417.920 - 14518.745: 28.0039% ( 129) 00:08:14.432 14518.745 - 14619.569: 29.6027% ( 132) 00:08:14.432 14619.569 - 14720.394: 31.3469% ( 144) 00:08:14.432 14720.394 - 14821.218: 33.1395% ( 148) 00:08:14.432 14821.218 - 14922.043: 35.2955% ( 178) 00:08:14.432 14922.043 - 15022.868: 37.6332% ( 193) 00:08:14.432 15022.868 - 15123.692: 40.1163% ( 205) 00:08:14.432 15123.692 - 15224.517: 43.0475% ( 242) 00:08:14.432 15224.517 - 15325.342: 46.0756% ( 250) 00:08:14.432 15325.342 - 15426.166: 49.3944% ( 274) 00:08:14.432 15426.166 - 15526.991: 52.6284% ( 267) 00:08:14.432 15526.991 - 15627.815: 55.6807% ( 252) 00:08:14.432 15627.815 - 15728.640: 58.7936% ( 257) 00:08:14.432 15728.640 - 15829.465: 61.9307% ( 259) 00:08:14.432 15829.465 - 15930.289: 64.9830% ( 252) 00:08:14.432 15930.289 - 16031.114: 67.6478% ( 220) 00:08:14.432 16031.114 - 16131.938: 70.3004% ( 219) 00:08:14.432 16131.938 - 16232.763: 72.6017% ( 190) 00:08:14.432 16232.763 - 16333.588: 74.7335% ( 176) 00:08:14.432 16333.588 - 16434.412: 76.8290% ( 173) 00:08:14.432 16434.412 - 16535.237: 78.9365% ( 174) 00:08:14.432 16535.237 - 16636.062: 80.6928% ( 145) 00:08:14.432 16636.062 - 16736.886: 82.3159% ( 134) 00:08:14.432 16736.886 - 16837.711: 83.9147% ( 132) 00:08:14.432 16837.711 - 16938.535: 85.2592% ( 111) 00:08:14.432 16938.535 - 17039.360: 86.3978% ( 94) 00:08:14.432 17039.360 - 17140.185: 87.5000% ( 91) 00:08:14.432 17140.185 - 17241.009: 88.5174% ( 84) 00:08:14.432 17241.009 - 17341.834: 89.4380% ( 76) 00:08:14.432 17341.834 - 17442.658: 90.2495% ( 67) 00:08:14.432 17442.658 - 17543.483: 90.9641% ( 59) 00:08:14.432 17543.483 - 17644.308: 91.5334% ( 47) 00:08:14.432 17644.308 - 17745.132: 92.0785% ( 45) 00:08:14.432 17745.132 - 17845.957: 92.4782% ( 33) 00:08:14.432 17845.957 - 17946.782: 92.8658% ( 32) 00:08:14.432 17946.782 - 18047.606: 93.1686% ( 25) 00:08:14.432 18047.606 - 18148.431: 93.4230% ( 21) 00:08:14.432 18148.431 - 18249.255: 93.7137% ( 24) 00:08:14.432 18249.255 - 18350.080: 93.9922% ( 23) 00:08:14.432 18350.080 - 18450.905: 94.2829% ( 24) 00:08:14.432 18450.905 - 18551.729: 94.4646% ( 15) 00:08:14.432 18551.729 - 18652.554: 94.6705% ( 17) 00:08:14.432 18652.554 - 18753.378: 94.8280% ( 13) 00:08:14.432 18753.378 - 18854.203: 94.9855% ( 13) 00:08:14.432 18854.203 - 18955.028: 95.3246% ( 28) 00:08:14.432 18955.028 - 19055.852: 95.6274% ( 25) 00:08:14.432 19055.852 - 19156.677: 95.9545% ( 27) 00:08:14.432 19156.677 - 19257.502: 96.2330% ( 23) 00:08:14.432 19257.502 - 19358.326: 96.5359% ( 25) 00:08:14.432 19358.326 - 19459.151: 96.8266% ( 24) 00:08:14.432 19459.151 - 19559.975: 97.1051% ( 23) 00:08:14.432 19559.975 - 19660.800: 97.3716% ( 22) 00:08:14.432 19660.800 - 19761.625: 97.6623% ( 24) 00:08:14.432 19761.625 - 19862.449: 97.9046% ( 20) 00:08:14.432 19862.449 - 19963.274: 98.0620% ( 13) 00:08:14.432 19963.274 - 20064.098: 98.2074% ( 12) 
00:08:14.432 20064.098 - 20164.923: 98.3648% ( 13) 00:08:14.432 20164.923 - 20265.748: 98.4496% ( 7) 00:08:14.432 22181.415 - 22282.240: 98.4738% ( 2) 00:08:14.432 22282.240 - 22383.065: 98.5223% ( 4) 00:08:14.432 22383.065 - 22483.889: 98.5707% ( 4) 00:08:14.432 22483.889 - 22584.714: 98.6071% ( 3) 00:08:14.432 22584.714 - 22685.538: 98.6313% ( 2) 00:08:14.432 22685.538 - 22786.363: 98.6676% ( 3) 00:08:14.432 22786.363 - 22887.188: 98.7040% ( 3) 00:08:14.432 22887.188 - 22988.012: 98.7282% ( 2) 00:08:14.432 22988.012 - 23088.837: 98.7645% ( 3) 00:08:14.432 23088.837 - 23189.662: 98.8130% ( 4) 00:08:14.432 23189.662 - 23290.486: 98.8493% ( 3) 00:08:14.432 23290.486 - 23391.311: 98.8857% ( 3) 00:08:14.432 23391.311 - 23492.135: 98.9220% ( 3) 00:08:14.432 23492.135 - 23592.960: 98.9704% ( 4) 00:08:14.432 23592.960 - 23693.785: 99.0068% ( 3) 00:08:14.432 23693.785 - 23794.609: 99.0552% ( 4) 00:08:14.432 23794.609 - 23895.434: 99.0916% ( 3) 00:08:14.432 23895.434 - 23996.258: 99.1400% ( 4) 00:08:14.432 23996.258 - 24097.083: 99.1764% ( 3) 00:08:14.432 24097.083 - 24197.908: 99.2127% ( 3) 00:08:14.432 24197.908 - 24298.732: 99.2248% ( 1) 00:08:14.432 31658.929 - 31860.578: 99.3338% ( 9) 00:08:14.432 31860.578 - 32062.228: 99.4671% ( 11) 00:08:14.432 32062.228 - 32263.877: 99.6003% ( 11) 00:08:14.432 32263.877 - 32465.526: 99.7456% ( 12) 00:08:14.432 32465.526 - 32667.175: 99.8910% ( 12) 00:08:14.432 32667.175 - 32868.825: 100.0000% ( 9) 00:08:14.432 00:08:14.432 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:14.432 ============================================================================== 00:08:14.432 Range in us Cumulative IO count 00:08:14.433 6099.889 - 6125.095: 0.0969% ( 8) 00:08:14.433 6125.095 - 6150.302: 0.1332% ( 3) 00:08:14.433 6150.302 - 6175.508: 0.1575% ( 2) 00:08:14.433 6175.508 - 6200.714: 0.1696% ( 1) 00:08:14.433 6200.714 - 6225.920: 0.1938% ( 2) 00:08:14.433 6225.920 - 6251.126: 0.2180% ( 2) 00:08:14.433 6251.126 - 6276.332: 0.2422% ( 2) 00:08:14.433 6276.332 - 6301.538: 0.2786% ( 3) 00:08:14.433 6301.538 - 6326.745: 0.2907% ( 1) 00:08:14.433 6326.745 - 6351.951: 0.3149% ( 2) 00:08:14.433 6351.951 - 6377.157: 0.3391% ( 2) 00:08:14.433 6377.157 - 6402.363: 0.3634% ( 2) 00:08:14.433 6402.363 - 6427.569: 0.3755% ( 1) 00:08:14.433 6427.569 - 6452.775: 0.3997% ( 2) 00:08:14.433 6452.775 - 6503.188: 0.4603% ( 5) 00:08:14.433 6503.188 - 6553.600: 0.5087% ( 4) 00:08:14.433 6553.600 - 6604.012: 0.5572% ( 4) 00:08:14.433 6604.012 - 6654.425: 0.6056% ( 4) 00:08:14.433 6654.425 - 6704.837: 0.6662% ( 5) 00:08:14.433 6704.837 - 6755.249: 0.7146% ( 4) 00:08:14.433 6755.249 - 6805.662: 0.7631% ( 4) 00:08:14.433 6805.662 - 6856.074: 0.7752% ( 1) 00:08:14.433 10838.646 - 10889.058: 0.7873% ( 1) 00:08:14.433 10889.058 - 10939.471: 0.8600% ( 6) 00:08:14.433 10939.471 - 10989.883: 0.8963% ( 3) 00:08:14.433 10989.883 - 11040.295: 0.9448% ( 4) 00:08:14.433 11040.295 - 11090.708: 0.9811% ( 3) 00:08:14.433 11090.708 - 11141.120: 1.0174% ( 3) 00:08:14.433 11141.120 - 11191.532: 1.0538% ( 3) 00:08:14.433 11191.532 - 11241.945: 1.0901% ( 3) 00:08:14.433 11241.945 - 11292.357: 1.1265% ( 3) 00:08:14.433 11292.357 - 11342.769: 1.1507% ( 2) 00:08:14.433 11342.769 - 11393.182: 1.1991% ( 4) 00:08:14.433 11393.182 - 11443.594: 1.2355% ( 3) 00:08:14.433 11443.594 - 11494.006: 1.2718% ( 3) 00:08:14.433 11494.006 - 11544.418: 1.3203% ( 4) 00:08:14.433 11544.418 - 11594.831: 1.3808% ( 5) 00:08:14.433 11594.831 - 11645.243: 1.4293% ( 4) 00:08:14.433 11645.243 - 11695.655: 1.5141% ( 7) 
00:08:14.433 11695.655 - 11746.068: 1.6473% ( 11) 00:08:14.433 11746.068 - 11796.480: 1.7926% ( 12) 00:08:14.433 11796.480 - 11846.892: 1.9016% ( 9) 00:08:14.433 11846.892 - 11897.305: 2.0228% ( 10) 00:08:14.433 11897.305 - 11947.717: 2.1560% ( 11) 00:08:14.433 11947.717 - 11998.129: 2.2771% ( 10) 00:08:14.433 11998.129 - 12048.542: 2.3983% ( 10) 00:08:14.433 12048.542 - 12098.954: 2.5557% ( 13) 00:08:14.433 12098.954 - 12149.366: 2.7374% ( 15) 00:08:14.433 12149.366 - 12199.778: 2.9070% ( 14) 00:08:14.433 12199.778 - 12250.191: 3.1492% ( 20) 00:08:14.433 12250.191 - 12300.603: 3.3188% ( 14) 00:08:14.433 12300.603 - 12351.015: 3.5126% ( 16) 00:08:14.433 12351.015 - 12401.428: 3.6701% ( 13) 00:08:14.433 12401.428 - 12451.840: 3.8154% ( 12) 00:08:14.433 12451.840 - 12502.252: 4.0940% ( 23) 00:08:14.433 12502.252 - 12552.665: 4.3241% ( 19) 00:08:14.433 12552.665 - 12603.077: 4.5906% ( 22) 00:08:14.433 12603.077 - 12653.489: 4.9176% ( 27) 00:08:14.433 12653.489 - 12703.902: 5.2204% ( 25) 00:08:14.433 12703.902 - 12754.314: 5.5233% ( 25) 00:08:14.433 12754.314 - 12804.726: 5.8745% ( 29) 00:08:14.433 12804.726 - 12855.138: 6.2500% ( 31) 00:08:14.433 12855.138 - 12905.551: 6.6013% ( 29) 00:08:14.433 12905.551 - 13006.375: 7.3280% ( 60) 00:08:14.433 13006.375 - 13107.200: 8.0790% ( 62) 00:08:14.433 13107.200 - 13208.025: 8.9268% ( 70) 00:08:14.433 13208.025 - 13308.849: 9.8716% ( 78) 00:08:14.433 13308.849 - 13409.674: 11.2040% ( 110) 00:08:14.433 13409.674 - 13510.498: 12.5969% ( 115) 00:08:14.433 13510.498 - 13611.323: 14.0383% ( 119) 00:08:14.433 13611.323 - 13712.148: 15.4918% ( 120) 00:08:14.433 13712.148 - 13812.972: 16.7999% ( 108) 00:08:14.433 13812.972 - 13913.797: 18.2413% ( 119) 00:08:14.433 13913.797 - 14014.622: 19.6584% ( 117) 00:08:14.433 14014.622 - 14115.446: 20.9545% ( 107) 00:08:14.433 14115.446 - 14216.271: 22.3716% ( 117) 00:08:14.433 14216.271 - 14317.095: 23.9220% ( 128) 00:08:14.433 14317.095 - 14417.920: 25.9084% ( 164) 00:08:14.433 14417.920 - 14518.745: 27.6526% ( 144) 00:08:14.433 14518.745 - 14619.569: 29.5543% ( 157) 00:08:14.433 14619.569 - 14720.394: 31.6860% ( 176) 00:08:14.433 14720.394 - 14821.218: 33.9995% ( 191) 00:08:14.433 14821.218 - 14922.043: 36.4583% ( 203) 00:08:14.433 14922.043 - 15022.868: 39.0867% ( 217) 00:08:14.433 15022.868 - 15123.692: 41.4850% ( 198) 00:08:14.433 15123.692 - 15224.517: 44.0286% ( 210) 00:08:14.433 15224.517 - 15325.342: 46.8266% ( 231) 00:08:14.433 15325.342 - 15426.166: 49.7214% ( 239) 00:08:14.433 15426.166 - 15526.991: 52.5557% ( 234) 00:08:14.433 15526.991 - 15627.815: 55.4264% ( 237) 00:08:14.433 15627.815 - 15728.640: 58.3818% ( 244) 00:08:14.433 15728.640 - 15829.465: 61.1555% ( 229) 00:08:14.433 15829.465 - 15930.289: 63.8687% ( 224) 00:08:14.433 15930.289 - 16031.114: 66.5940% ( 225) 00:08:14.433 16031.114 - 16131.938: 69.2951% ( 223) 00:08:14.433 16131.938 - 16232.763: 72.0203% ( 225) 00:08:14.433 16232.763 - 16333.588: 74.2975% ( 188) 00:08:14.433 16333.588 - 16434.412: 76.3203% ( 167) 00:08:14.433 16434.412 - 16535.237: 78.3915% ( 171) 00:08:14.433 16535.237 - 16636.062: 80.2083% ( 150) 00:08:14.433 16636.062 - 16736.886: 81.7103% ( 124) 00:08:14.433 16736.886 - 16837.711: 83.2364% ( 126) 00:08:14.433 16837.711 - 16938.535: 84.4598% ( 101) 00:08:14.433 16938.535 - 17039.360: 85.5620% ( 91) 00:08:14.433 17039.360 - 17140.185: 86.6400% ( 89) 00:08:14.433 17140.185 - 17241.009: 87.5606% ( 76) 00:08:14.433 17241.009 - 17341.834: 88.5053% ( 78) 00:08:14.433 17341.834 - 17442.658: 89.4864% ( 81) 00:08:14.433 17442.658 - 
17543.483: 90.2859% ( 66) 00:08:14.433 17543.483 - 17644.308: 90.9641% ( 56) 00:08:14.433 17644.308 - 17745.132: 91.7151% ( 62) 00:08:14.433 17745.132 - 17845.957: 92.3571% ( 53) 00:08:14.433 17845.957 - 17946.782: 92.8537% ( 41) 00:08:14.433 17946.782 - 18047.606: 93.2413% ( 32) 00:08:14.433 18047.606 - 18148.431: 93.5925% ( 29) 00:08:14.433 18148.431 - 18249.255: 93.7984% ( 17) 00:08:14.433 18249.255 - 18350.080: 93.9680% ( 14) 00:08:14.433 18350.080 - 18450.905: 94.1982% ( 19) 00:08:14.433 18450.905 - 18551.729: 94.4646% ( 22) 00:08:14.433 18551.729 - 18652.554: 94.7311% ( 22) 00:08:14.433 18652.554 - 18753.378: 94.9128% ( 15) 00:08:14.433 18753.378 - 18854.203: 95.1550% ( 20) 00:08:14.433 18854.203 - 18955.028: 95.3731% ( 18) 00:08:14.433 18955.028 - 19055.852: 95.7243% ( 29) 00:08:14.433 19055.852 - 19156.677: 96.0635% ( 28) 00:08:14.433 19156.677 - 19257.502: 96.4511% ( 32) 00:08:14.433 19257.502 - 19358.326: 96.8266% ( 31) 00:08:14.433 19358.326 - 19459.151: 97.1778% ( 29) 00:08:14.433 19459.151 - 19559.975: 97.5048% ( 27) 00:08:14.433 19559.975 - 19660.800: 97.8198% ( 26) 00:08:14.433 19660.800 - 19761.625: 98.0378% ( 18) 00:08:14.433 19761.625 - 19862.449: 98.1953% ( 13) 00:08:14.433 19862.449 - 19963.274: 98.3527% ( 13) 00:08:14.433 19963.274 - 20064.098: 98.4375% ( 7) 00:08:14.433 20064.098 - 20164.923: 98.4496% ( 1) 00:08:14.433 22584.714 - 22685.538: 98.4859% ( 3) 00:08:14.433 22685.538 - 22786.363: 98.5344% ( 4) 00:08:14.433 22786.363 - 22887.188: 98.5828% ( 4) 00:08:14.433 22887.188 - 22988.012: 98.6071% ( 2) 00:08:14.433 22988.012 - 23088.837: 98.6434% ( 3) 00:08:14.433 23088.837 - 23189.662: 98.6797% ( 3) 00:08:14.433 23189.662 - 23290.486: 98.7161% ( 3) 00:08:14.433 23290.486 - 23391.311: 98.7524% ( 3) 00:08:14.433 23391.311 - 23492.135: 98.7888% ( 3) 00:08:14.433 23492.135 - 23592.960: 98.8251% ( 3) 00:08:14.433 23592.960 - 23693.785: 98.8735% ( 4) 00:08:14.433 23693.785 - 23794.609: 98.9099% ( 3) 00:08:14.433 23794.609 - 23895.434: 98.9462% ( 3) 00:08:14.433 23895.434 - 23996.258: 98.9947% ( 4) 00:08:14.433 23996.258 - 24097.083: 99.0310% ( 3) 00:08:14.433 24097.083 - 24197.908: 99.0795% ( 4) 00:08:14.433 24197.908 - 24298.732: 99.1158% ( 3) 00:08:14.433 24298.732 - 24399.557: 99.1642% ( 4) 00:08:14.433 24399.557 - 24500.382: 99.2006% ( 3) 00:08:14.433 24500.382 - 24601.206: 99.2248% ( 2) 00:08:14.433 31053.982 - 31255.631: 99.3338% ( 9) 00:08:14.433 31255.631 - 31457.280: 99.4913% ( 13) 00:08:14.433 31457.280 - 31658.929: 99.6245% ( 11) 00:08:14.433 31658.929 - 31860.578: 99.7699% ( 12) 00:08:14.433 31860.578 - 32062.228: 99.9152% ( 12) 00:08:14.433 32062.228 - 32263.877: 100.0000% ( 7) 00:08:14.433 00:08:14.433 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:14.433 ============================================================================== 00:08:14.433 Range in us Cumulative IO count 00:08:14.433 5368.911 - 5394.117: 0.0363% ( 3) 00:08:14.433 5394.117 - 5419.323: 0.0848% ( 4) 00:08:14.433 5419.323 - 5444.529: 0.1211% ( 3) 00:08:14.433 5444.529 - 5469.735: 0.1575% ( 3) 00:08:14.433 5494.942 - 5520.148: 0.1817% ( 2) 00:08:14.433 5520.148 - 5545.354: 0.2059% ( 2) 00:08:14.433 5545.354 - 5570.560: 0.2301% ( 2) 00:08:14.433 5570.560 - 5595.766: 0.2544% ( 2) 00:08:14.433 5595.766 - 5620.972: 0.2786% ( 2) 00:08:14.433 5620.972 - 5646.178: 0.3028% ( 2) 00:08:14.433 5646.178 - 5671.385: 0.3391% ( 3) 00:08:14.433 5671.385 - 5696.591: 0.3634% ( 2) 00:08:14.433 5696.591 - 5721.797: 0.3876% ( 2) 00:08:14.433 5721.797 - 5747.003: 0.4118% ( 2) 00:08:14.433 
5747.003 - 5772.209: 0.4360% ( 2) 00:08:14.433 5772.209 - 5797.415: 0.4482% ( 1) 00:08:14.433 5797.415 - 5822.622: 0.4724% ( 2) 00:08:14.433 5822.622 - 5847.828: 0.4966% ( 2) 00:08:14.433 5847.828 - 5873.034: 0.5087% ( 1) 00:08:14.433 5873.034 - 5898.240: 0.5329% ( 2) 00:08:14.433 5898.240 - 5923.446: 0.5572% ( 2) 00:08:14.433 5923.446 - 5948.652: 0.5814% ( 2) 00:08:14.433 5948.652 - 5973.858: 0.6056% ( 2) 00:08:14.434 5973.858 - 5999.065: 0.6298% ( 2) 00:08:14.434 5999.065 - 6024.271: 0.6541% ( 2) 00:08:14.434 6024.271 - 6049.477: 0.6783% ( 2) 00:08:14.434 6049.477 - 6074.683: 0.7025% ( 2) 00:08:14.434 6074.683 - 6099.889: 0.7267% ( 2) 00:08:14.434 6099.889 - 6125.095: 0.7510% ( 2) 00:08:14.434 6125.095 - 6150.302: 0.7752% ( 2) 00:08:14.434 10384.935 - 10435.348: 0.7873% ( 1) 00:08:14.434 10435.348 - 10485.760: 0.8842% ( 8) 00:08:14.434 10485.760 - 10536.172: 0.9205% ( 3) 00:08:14.434 10536.172 - 10586.585: 0.9569% ( 3) 00:08:14.434 10586.585 - 10636.997: 1.0296% ( 6) 00:08:14.434 10636.997 - 10687.409: 1.0780% ( 4) 00:08:14.434 10687.409 - 10737.822: 1.1265% ( 4) 00:08:14.434 10737.822 - 10788.234: 1.1749% ( 4) 00:08:14.434 10788.234 - 10838.646: 1.2234% ( 4) 00:08:14.434 10838.646 - 10889.058: 1.2960% ( 6) 00:08:14.434 10889.058 - 10939.471: 1.3687% ( 6) 00:08:14.434 10939.471 - 10989.883: 1.4656% ( 8) 00:08:14.434 10989.883 - 11040.295: 1.5504% ( 7) 00:08:14.434 11040.295 - 11090.708: 1.6352% ( 7) 00:08:14.434 11090.708 - 11141.120: 1.7200% ( 7) 00:08:14.434 11141.120 - 11191.532: 1.7926% ( 6) 00:08:14.434 11191.532 - 11241.945: 1.8290% ( 3) 00:08:14.434 11241.945 - 11292.357: 1.8653% ( 3) 00:08:14.434 11292.357 - 11342.769: 1.9259% ( 5) 00:08:14.434 11342.769 - 11393.182: 2.0470% ( 10) 00:08:14.434 11393.182 - 11443.594: 2.1076% ( 5) 00:08:14.434 11443.594 - 11494.006: 2.1923% ( 7) 00:08:14.434 11494.006 - 11544.418: 2.2650% ( 6) 00:08:14.434 11544.418 - 11594.831: 2.3377% ( 6) 00:08:14.434 11594.831 - 11645.243: 2.4346% ( 8) 00:08:14.434 11645.243 - 11695.655: 2.5436% ( 9) 00:08:14.434 11695.655 - 11746.068: 2.7132% ( 14) 00:08:14.434 11746.068 - 11796.480: 2.8464% ( 11) 00:08:14.434 11796.480 - 11846.892: 2.9918% ( 12) 00:08:14.434 11846.892 - 11897.305: 3.1371% ( 12) 00:08:14.434 11897.305 - 11947.717: 3.2582% ( 10) 00:08:14.434 11947.717 - 11998.129: 3.3551% ( 8) 00:08:14.434 11998.129 - 12048.542: 3.4763% ( 10) 00:08:14.434 12048.542 - 12098.954: 3.6579% ( 15) 00:08:14.434 12098.954 - 12149.366: 3.7912% ( 11) 00:08:14.434 12149.366 - 12199.778: 3.9608% ( 14) 00:08:14.434 12199.778 - 12250.191: 4.1182% ( 13) 00:08:14.434 12250.191 - 12300.603: 4.2151% ( 8) 00:08:14.434 12300.603 - 12351.015: 4.3120% ( 8) 00:08:14.434 12351.015 - 12401.428: 4.4816% ( 14) 00:08:14.434 12401.428 - 12451.840: 4.5906% ( 9) 00:08:14.434 12451.840 - 12502.252: 4.7359% ( 12) 00:08:14.434 12502.252 - 12552.665: 4.9297% ( 16) 00:08:14.434 12552.665 - 12603.077: 5.0751% ( 12) 00:08:14.434 12603.077 - 12653.489: 5.1841% ( 9) 00:08:14.434 12653.489 - 12703.902: 5.3052% ( 10) 00:08:14.434 12703.902 - 12754.314: 5.4506% ( 12) 00:08:14.434 12754.314 - 12804.726: 5.5838% ( 11) 00:08:14.434 12804.726 - 12855.138: 5.7655% ( 15) 00:08:14.434 12855.138 - 12905.551: 5.9230% ( 13) 00:08:14.434 12905.551 - 13006.375: 6.3590% ( 36) 00:08:14.434 13006.375 - 13107.200: 6.8920% ( 44) 00:08:14.434 13107.200 - 13208.025: 7.4976% ( 50) 00:08:14.434 13208.025 - 13308.849: 8.4423% ( 78) 00:08:14.434 13308.849 - 13409.674: 9.6657% ( 101) 00:08:14.434 13409.674 - 13510.498: 10.9012% ( 102) 00:08:14.434 13510.498 - 13611.323: 
12.2820% ( 114) 00:08:14.434 13611.323 - 13712.148: 13.8929% ( 133) 00:08:14.434 13712.148 - 13812.972: 15.4918% ( 132) 00:08:14.434 13812.972 - 13913.797: 17.2359% ( 144) 00:08:14.434 13913.797 - 14014.622: 18.9801% ( 144) 00:08:14.434 14014.622 - 14115.446: 20.6880% ( 141) 00:08:14.434 14115.446 - 14216.271: 22.6139% ( 159) 00:08:14.434 14216.271 - 14317.095: 24.3338% ( 142) 00:08:14.434 14317.095 - 14417.920: 26.0174% ( 139) 00:08:14.434 14417.920 - 14518.745: 28.1371% ( 175) 00:08:14.434 14518.745 - 14619.569: 30.2326% ( 173) 00:08:14.434 14619.569 - 14720.394: 32.2311% ( 165) 00:08:14.434 14720.394 - 14821.218: 34.5203% ( 189) 00:08:14.434 14821.218 - 14922.043: 36.7006% ( 180) 00:08:14.434 14922.043 - 15022.868: 38.9293% ( 184) 00:08:14.434 15022.868 - 15123.692: 41.3760% ( 202) 00:08:14.434 15123.692 - 15224.517: 44.1618% ( 230) 00:08:14.434 15224.517 - 15325.342: 46.9719% ( 232) 00:08:14.434 15325.342 - 15426.166: 49.6609% ( 222) 00:08:14.434 15426.166 - 15526.991: 52.2650% ( 215) 00:08:14.434 15526.991 - 15627.815: 54.9661% ( 223) 00:08:14.434 15627.815 - 15728.640: 57.8004% ( 234) 00:08:14.434 15728.640 - 15829.465: 60.7074% ( 240) 00:08:14.434 15829.465 - 15930.289: 63.6991% ( 247) 00:08:14.434 15930.289 - 16031.114: 66.4971% ( 231) 00:08:14.434 16031.114 - 16131.938: 69.2951% ( 231) 00:08:14.434 16131.938 - 16232.763: 71.4874% ( 181) 00:08:14.434 16232.763 - 16333.588: 73.5828% ( 173) 00:08:14.434 16333.588 - 16434.412: 75.7631% ( 180) 00:08:14.434 16434.412 - 16535.237: 78.0402% ( 188) 00:08:14.434 16535.237 - 16636.062: 80.1235% ( 172) 00:08:14.434 16636.062 - 16736.886: 81.9646% ( 152) 00:08:14.434 16736.886 - 16837.711: 83.3454% ( 114) 00:08:14.434 16837.711 - 16938.535: 84.8716% ( 126) 00:08:14.434 16938.535 - 17039.360: 86.1313% ( 104) 00:08:14.434 17039.360 - 17140.185: 87.2699% ( 94) 00:08:14.434 17140.185 - 17241.009: 88.2267% ( 79) 00:08:14.434 17241.009 - 17341.834: 89.0988% ( 72) 00:08:14.434 17341.834 - 17442.658: 89.6076% ( 42) 00:08:14.434 17442.658 - 17543.483: 90.1163% ( 42) 00:08:14.434 17543.483 - 17644.308: 90.5523% ( 36) 00:08:14.434 17644.308 - 17745.132: 91.0368% ( 40) 00:08:14.434 17745.132 - 17845.957: 91.5819% ( 45) 00:08:14.434 17845.957 - 17946.782: 92.0543% ( 39) 00:08:14.434 17946.782 - 18047.606: 92.6235% ( 47) 00:08:14.434 18047.606 - 18148.431: 93.1807% ( 46) 00:08:14.434 18148.431 - 18249.255: 93.8106% ( 52) 00:08:14.434 18249.255 - 18350.080: 94.3193% ( 42) 00:08:14.434 18350.080 - 18450.905: 94.7432% ( 35) 00:08:14.434 18450.905 - 18551.729: 95.1429% ( 33) 00:08:14.434 18551.729 - 18652.554: 95.5426% ( 33) 00:08:14.434 18652.554 - 18753.378: 95.9181% ( 31) 00:08:14.434 18753.378 - 18854.203: 96.3421% ( 35) 00:08:14.434 18854.203 - 18955.028: 96.7418% ( 33) 00:08:14.434 18955.028 - 19055.852: 97.1415% ( 33) 00:08:14.434 19055.852 - 19156.677: 97.4322% ( 24) 00:08:14.434 19156.677 - 19257.502: 97.6744% ( 20) 00:08:14.434 19257.502 - 19358.326: 97.8924% ( 18) 00:08:14.434 19358.326 - 19459.151: 98.0862% ( 16) 00:08:14.434 19459.151 - 19559.975: 98.2195% ( 11) 00:08:14.434 19559.975 - 19660.800: 98.3406% ( 10) 00:08:14.434 19660.800 - 19761.625: 98.3891% ( 4) 00:08:14.434 19761.625 - 19862.449: 98.4375% ( 4) 00:08:14.434 19862.449 - 19963.274: 98.4496% ( 1) 00:08:14.434 22988.012 - 23088.837: 98.4738% ( 2) 00:08:14.434 23088.837 - 23189.662: 98.5102% ( 3) 00:08:14.434 23189.662 - 23290.486: 98.5465% ( 3) 00:08:14.434 23290.486 - 23391.311: 98.5828% ( 3) 00:08:14.434 23391.311 - 23492.135: 98.6192% ( 3) 00:08:14.434 23492.135 - 23592.960: 98.6555% 
( 3) 00:08:14.434 23592.960 - 23693.785: 98.7040% ( 4) 00:08:14.434 23693.785 - 23794.609: 98.7403% ( 3) 00:08:14.434 23794.609 - 23895.434: 98.7888% ( 4) 00:08:14.434 23895.434 - 23996.258: 98.8251% ( 3) 00:08:14.434 23996.258 - 24097.083: 98.8735% ( 4) 00:08:14.434 24097.083 - 24197.908: 98.9099% ( 3) 00:08:14.434 24197.908 - 24298.732: 98.9462% ( 3) 00:08:14.434 24298.732 - 24399.557: 98.9826% ( 3) 00:08:14.434 24399.557 - 24500.382: 99.0310% ( 4) 00:08:14.434 24500.382 - 24601.206: 99.0673% ( 3) 00:08:14.434 24601.206 - 24702.031: 99.1158% ( 4) 00:08:14.434 24702.031 - 24802.855: 99.1521% ( 3) 00:08:14.434 24802.855 - 24903.680: 99.2006% ( 4) 00:08:14.434 24903.680 - 25004.505: 99.2248% ( 2) 00:08:14.434 30852.332 - 31053.982: 99.3459% ( 10) 00:08:14.434 31255.631 - 31457.280: 99.4065% ( 5) 00:08:14.434 31457.280 - 31658.929: 99.5397% ( 11) 00:08:14.434 31658.929 - 31860.578: 99.6366% ( 8) 00:08:14.434 31860.578 - 32062.228: 99.7820% ( 12) 00:08:14.434 32062.228 - 32263.877: 99.9273% ( 12) 00:08:14.434 32263.877 - 32465.526: 100.0000% ( 6) 00:08:14.434 00:08:14.434 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:14.435 ============================================================================== 00:08:14.435 Range in us Cumulative IO count 00:08:14.435 4763.963 - 4789.169: 0.0242% ( 2) 00:08:14.435 4789.169 - 4814.375: 0.0484% ( 2) 00:08:14.435 4814.375 - 4839.582: 0.0727% ( 2) 00:08:14.435 4839.582 - 4864.788: 0.1090% ( 3) 00:08:14.435 4864.788 - 4889.994: 0.1453% ( 3) 00:08:14.435 4889.994 - 4915.200: 0.1696% ( 2) 00:08:14.435 4915.200 - 4940.406: 0.1938% ( 2) 00:08:14.435 4940.406 - 4965.612: 0.2180% ( 2) 00:08:14.435 4965.612 - 4990.818: 0.2422% ( 2) 00:08:14.435 4990.818 - 5016.025: 0.2786% ( 3) 00:08:14.435 5016.025 - 5041.231: 0.3028% ( 2) 00:08:14.435 5041.231 - 5066.437: 0.3270% ( 2) 00:08:14.435 5066.437 - 5091.643: 0.3513% ( 2) 00:08:14.435 5091.643 - 5116.849: 0.3755% ( 2) 00:08:14.435 5116.849 - 5142.055: 0.3997% ( 2) 00:08:14.435 5142.055 - 5167.262: 0.4360% ( 3) 00:08:14.435 5167.262 - 5192.468: 0.4603% ( 2) 00:08:14.435 5192.468 - 5217.674: 0.4845% ( 2) 00:08:14.435 5217.674 - 5242.880: 0.5087% ( 2) 00:08:14.435 5242.880 - 5268.086: 0.5329% ( 2) 00:08:14.435 5268.086 - 5293.292: 0.5572% ( 2) 00:08:14.435 5293.292 - 5318.498: 0.5814% ( 2) 00:08:14.435 5318.498 - 5343.705: 0.6056% ( 2) 00:08:14.435 5343.705 - 5368.911: 0.6298% ( 2) 00:08:14.435 5368.911 - 5394.117: 0.6541% ( 2) 00:08:14.435 5394.117 - 5419.323: 0.6783% ( 2) 00:08:14.435 5419.323 - 5444.529: 0.7025% ( 2) 00:08:14.435 5444.529 - 5469.735: 0.7267% ( 2) 00:08:14.435 5469.735 - 5494.942: 0.7510% ( 2) 00:08:14.435 5494.942 - 5520.148: 0.7752% ( 2) 00:08:14.435 9679.163 - 9729.575: 0.8842% ( 9) 00:08:14.435 9729.575 - 9779.988: 1.0417% ( 13) 00:08:14.435 9779.988 - 9830.400: 1.0901% ( 4) 00:08:14.435 9830.400 - 9880.812: 1.1386% ( 4) 00:08:14.435 9880.812 - 9931.225: 1.1991% ( 5) 00:08:14.435 9931.225 - 9981.637: 1.2355% ( 3) 00:08:14.435 9981.637 - 10032.049: 1.2597% ( 2) 00:08:14.435 10032.049 - 10082.462: 1.3081% ( 4) 00:08:14.435 10082.462 - 10132.874: 1.3687% ( 5) 00:08:14.435 10132.874 - 10183.286: 1.4172% ( 4) 00:08:14.435 10183.286 - 10233.698: 1.4656% ( 4) 00:08:14.435 10233.698 - 10284.111: 1.5141% ( 4) 00:08:14.435 10284.111 - 10334.523: 1.5504% ( 3) 00:08:14.435 10687.409 - 10737.822: 1.5746% ( 2) 00:08:14.435 10737.822 - 10788.234: 1.6109% ( 3) 00:08:14.435 10788.234 - 10838.646: 1.6594% ( 4) 00:08:14.435 10838.646 - 10889.058: 1.7442% ( 7) 00:08:14.435 10889.058 - 10939.471: 
1.8290% ( 7) 00:08:14.435 10939.471 - 10989.883: 1.8653% ( 3) 00:08:14.435 10989.883 - 11040.295: 1.9138% ( 4) 00:08:14.435 11040.295 - 11090.708: 1.9622% ( 4) 00:08:14.435 11090.708 - 11141.120: 1.9985% ( 3) 00:08:14.435 11141.120 - 11191.532: 2.0228% ( 2) 00:08:14.435 11191.532 - 11241.945: 2.0712% ( 4) 00:08:14.435 11241.945 - 11292.357: 2.1197% ( 4) 00:08:14.435 11292.357 - 11342.769: 2.1560% ( 3) 00:08:14.435 11342.769 - 11393.182: 2.2045% ( 4) 00:08:14.435 11393.182 - 11443.594: 2.2408% ( 3) 00:08:14.435 11443.594 - 11494.006: 2.2771% ( 3) 00:08:14.435 11494.006 - 11544.418: 2.3135% ( 3) 00:08:14.435 11544.418 - 11594.831: 2.3256% ( 1) 00:08:14.435 11846.892 - 11897.305: 2.3377% ( 1) 00:08:14.435 11897.305 - 11947.717: 2.3983% ( 5) 00:08:14.435 11947.717 - 11998.129: 2.4709% ( 6) 00:08:14.435 11998.129 - 12048.542: 2.5436% ( 6) 00:08:14.435 12048.542 - 12098.954: 2.6647% ( 10) 00:08:14.435 12098.954 - 12149.366: 2.7859% ( 10) 00:08:14.435 12149.366 - 12199.778: 2.9070% ( 10) 00:08:14.435 12199.778 - 12250.191: 3.0644% ( 13) 00:08:14.435 12250.191 - 12300.603: 3.2098% ( 12) 00:08:14.435 12300.603 - 12351.015: 3.4036% ( 16) 00:08:14.435 12351.015 - 12401.428: 3.5610% ( 13) 00:08:14.435 12401.428 - 12451.840: 3.7064% ( 12) 00:08:14.435 12451.840 - 12502.252: 3.9244% ( 18) 00:08:14.435 12502.252 - 12552.665: 4.1061% ( 15) 00:08:14.435 12552.665 - 12603.077: 4.2999% ( 16) 00:08:14.435 12603.077 - 12653.489: 4.4695% ( 14) 00:08:14.435 12653.489 - 12703.902: 4.6996% ( 19) 00:08:14.435 12703.902 - 12754.314: 5.0024% ( 25) 00:08:14.435 12754.314 - 12804.726: 5.2931% ( 24) 00:08:14.435 12804.726 - 12855.138: 5.5717% ( 23) 00:08:14.435 12855.138 - 12905.551: 5.8866% ( 26) 00:08:14.435 12905.551 - 13006.375: 6.4922% ( 50) 00:08:14.435 13006.375 - 13107.200: 7.0252% ( 44) 00:08:14.435 13107.200 - 13208.025: 7.6187% ( 49) 00:08:14.435 13208.025 - 13308.849: 8.4908% ( 72) 00:08:14.435 13308.849 - 13409.674: 9.5567% ( 88) 00:08:14.435 13409.674 - 13510.498: 10.8891% ( 110) 00:08:14.435 13510.498 - 13611.323: 12.6453% ( 145) 00:08:14.435 13611.323 - 13712.148: 14.2805% ( 135) 00:08:14.435 13712.148 - 13812.972: 16.3275% ( 169) 00:08:14.435 13812.972 - 13913.797: 18.3140% ( 164) 00:08:14.435 13913.797 - 14014.622: 20.2883% ( 163) 00:08:14.435 14014.622 - 14115.446: 22.2263% ( 160) 00:08:14.435 14115.446 - 14216.271: 24.2248% ( 165) 00:08:14.435 14216.271 - 14317.095: 26.3081% ( 172) 00:08:14.435 14317.095 - 14417.920: 28.3309% ( 167) 00:08:14.435 14417.920 - 14518.745: 30.1841% ( 153) 00:08:14.435 14518.745 - 14619.569: 31.8193% ( 135) 00:08:14.435 14619.569 - 14720.394: 33.7330% ( 158) 00:08:14.435 14720.394 - 14821.218: 35.9617% ( 184) 00:08:14.435 14821.218 - 14922.043: 37.9482% ( 164) 00:08:14.435 14922.043 - 15022.868: 40.0073% ( 170) 00:08:14.435 15022.868 - 15123.692: 42.0422% ( 168) 00:08:14.435 15123.692 - 15224.517: 44.3677% ( 192) 00:08:14.435 15224.517 - 15325.342: 46.9477% ( 213) 00:08:14.435 15325.342 - 15426.166: 49.6609% ( 224) 00:08:14.435 15426.166 - 15526.991: 52.3740% ( 224) 00:08:14.435 15526.991 - 15627.815: 55.2204% ( 235) 00:08:14.435 15627.815 - 15728.640: 58.0184% ( 231) 00:08:14.435 15728.640 - 15829.465: 60.6105% ( 214) 00:08:14.435 15829.465 - 15930.289: 63.3600% ( 227) 00:08:14.435 15930.289 - 16031.114: 66.0732% ( 224) 00:08:14.435 16031.114 - 16131.938: 68.5683% ( 206) 00:08:14.435 16131.938 - 16232.763: 70.6516% ( 172) 00:08:14.435 16232.763 - 16333.588: 72.6865% ( 168) 00:08:14.435 16333.588 - 16434.412: 74.6851% ( 165) 00:08:14.435 16434.412 - 16535.237: 76.3687% ( 
139) 00:08:14.435 16535.237 - 16636.062: 77.9554% ( 131) 00:08:14.435 16636.062 - 16736.886: 79.6027% ( 136) 00:08:14.435 16736.886 - 16837.711: 81.1531% ( 128) 00:08:14.435 16837.711 - 16938.535: 82.7156% ( 129) 00:08:14.435 16938.535 - 17039.360: 84.0843% ( 113) 00:08:14.435 17039.360 - 17140.185: 85.2713% ( 98) 00:08:14.435 17140.185 - 17241.009: 86.6279% ( 112) 00:08:14.435 17241.009 - 17341.834: 87.9239% ( 107) 00:08:14.435 17341.834 - 17442.658: 89.2442% ( 109) 00:08:14.435 17442.658 - 17543.483: 90.3343% ( 90) 00:08:14.435 17543.483 - 17644.308: 91.3154% ( 81) 00:08:14.435 17644.308 - 17745.132: 92.1391% ( 68) 00:08:14.435 17745.132 - 17845.957: 92.9264% ( 65) 00:08:14.435 17845.957 - 17946.782: 93.7500% ( 68) 00:08:14.435 17946.782 - 18047.606: 94.3798% ( 52) 00:08:14.435 18047.606 - 18148.431: 94.8765% ( 41) 00:08:14.435 18148.431 - 18249.255: 95.2398% ( 30) 00:08:14.435 18249.255 - 18350.080: 95.5305% ( 24) 00:08:14.435 18350.080 - 18450.905: 95.6516% ( 10) 00:08:14.435 18450.905 - 18551.729: 95.8091% ( 13) 00:08:14.435 18551.729 - 18652.554: 95.9545% ( 12) 00:08:14.435 18652.554 - 18753.378: 96.1967% ( 20) 00:08:14.435 18753.378 - 18854.203: 96.4511% ( 21) 00:08:14.435 18854.203 - 18955.028: 96.6812% ( 19) 00:08:14.435 18955.028 - 19055.852: 96.9719% ( 24) 00:08:14.435 19055.852 - 19156.677: 97.2626% ( 24) 00:08:14.435 19156.677 - 19257.502: 97.5048% ( 20) 00:08:14.435 19257.502 - 19358.326: 97.7229% ( 18) 00:08:14.435 19358.326 - 19459.151: 97.9409% ( 18) 00:08:14.435 19459.151 - 19559.975: 98.0984% ( 13) 00:08:14.435 19559.975 - 19660.800: 98.2316% ( 11) 00:08:14.435 19660.800 - 19761.625: 98.3648% ( 11) 00:08:14.435 19761.625 - 19862.449: 98.4375% ( 6) 00:08:14.435 19862.449 - 19963.274: 98.4496% ( 1) 00:08:14.435 23189.662 - 23290.486: 98.4738% ( 2) 00:08:14.435 23290.486 - 23391.311: 98.5102% ( 3) 00:08:14.435 23391.311 - 23492.135: 98.5465% ( 3) 00:08:14.435 23492.135 - 23592.960: 98.5950% ( 4) 00:08:14.435 23592.960 - 23693.785: 98.6192% ( 2) 00:08:14.435 23693.785 - 23794.609: 98.6676% ( 4) 00:08:14.435 23794.609 - 23895.434: 98.7040% ( 3) 00:08:14.435 23895.434 - 23996.258: 98.7403% ( 3) 00:08:14.435 23996.258 - 24097.083: 98.7766% ( 3) 00:08:14.435 24097.083 - 24197.908: 98.8251% ( 4) 00:08:14.435 24197.908 - 24298.732: 98.8735% ( 4) 00:08:14.435 24298.732 - 24399.557: 98.9099% ( 3) 00:08:14.435 24399.557 - 24500.382: 98.9583% ( 4) 00:08:14.435 24500.382 - 24601.206: 98.9947% ( 3) 00:08:14.435 24601.206 - 24702.031: 99.0310% ( 3) 00:08:14.435 24702.031 - 24802.855: 99.0673% ( 3) 00:08:14.435 24802.855 - 24903.680: 99.1158% ( 4) 00:08:14.435 24903.680 - 25004.505: 99.1521% ( 3) 00:08:14.435 25004.505 - 25105.329: 99.1885% ( 3) 00:08:14.435 25105.329 - 25206.154: 99.2127% ( 2) 00:08:14.435 25206.154 - 25306.978: 99.2248% ( 1) 00:08:14.435 31255.631 - 31457.280: 99.2975% ( 6) 00:08:14.435 31457.280 - 31658.929: 99.4549% ( 13) 00:08:14.435 31658.929 - 31860.578: 99.6003% ( 12) 00:08:14.435 31860.578 - 32062.228: 99.7456% ( 12) 00:08:14.435 32062.228 - 32263.877: 99.8910% ( 12) 00:08:14.435 32263.877 - 32465.526: 100.0000% ( 9) 00:08:14.435 00:08:14.435 21:40:37 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:08:15.376 Initializing NVMe Controllers 00:08:15.376 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:15.376 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:15.376 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:15.376 Attached to NVMe 
Controller at 0000:00:12.0 [1b36:0010] 00:08:15.376 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:15.376 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:15.376 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:15.376 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:15.376 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:15.376 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:15.376 Initialization complete. Launching workers. 00:08:15.376 ======================================================== 00:08:15.376 Latency(us) 00:08:15.376 Device Information : IOPS MiB/s Average min max 00:08:15.376 PCIE (0000:00:10.0) NSID 1 from core 0: 10111.22 118.49 12666.12 7842.25 30528.55 00:08:15.376 PCIE (0000:00:13.0) NSID 1 from core 0: 10111.22 118.49 12660.44 7148.97 29636.18 00:08:15.376 PCIE (0000:00:11.0) NSID 1 from core 0: 10111.22 118.49 12650.01 5948.20 30380.11 00:08:15.376 PCIE (0000:00:12.0) NSID 1 from core 0: 10111.22 118.49 12639.06 5251.89 30741.11 00:08:15.376 PCIE (0000:00:12.0) NSID 2 from core 0: 10111.22 118.49 12628.05 4614.88 30949.67 00:08:15.376 PCIE (0000:00:12.0) NSID 3 from core 0: 10175.22 119.24 12537.28 3859.90 23557.23 00:08:15.376 ======================================================== 00:08:15.376 Total : 60731.32 711.70 12630.06 3859.90 30949.67 00:08:15.376 00:08:15.376 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:15.376 ================================================================================= 00:08:15.376 1.00000% : 9527.926us 00:08:15.376 10.00000% : 10586.585us 00:08:15.376 25.00000% : 11342.769us 00:08:15.376 50.00000% : 12300.603us 00:08:15.376 75.00000% : 13712.148us 00:08:15.376 90.00000% : 14821.218us 00:08:15.376 95.00000% : 15829.465us 00:08:15.376 98.00000% : 17341.834us 00:08:15.376 99.00000% : 22786.363us 00:08:15.376 99.50000% : 29440.788us 00:08:15.376 99.90000% : 30449.034us 00:08:15.376 99.99000% : 30650.683us 00:08:15.376 99.99900% : 30650.683us 00:08:15.376 99.99990% : 30650.683us 00:08:15.376 99.99999% : 30650.683us 00:08:15.376 00:08:15.376 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:15.376 ================================================================================= 00:08:15.376 1.00000% : 9578.338us 00:08:15.376 10.00000% : 10636.997us 00:08:15.376 25.00000% : 11292.357us 00:08:15.376 50.00000% : 12250.191us 00:08:15.376 75.00000% : 13812.972us 00:08:15.376 90.00000% : 14922.043us 00:08:15.376 95.00000% : 15728.640us 00:08:15.376 98.00000% : 17140.185us 00:08:15.376 99.00000% : 22282.240us 00:08:15.376 99.50000% : 29037.489us 00:08:15.376 99.90000% : 29642.437us 00:08:15.376 99.99000% : 29642.437us 00:08:15.376 99.99900% : 29642.437us 00:08:15.376 99.99990% : 29642.437us 00:08:15.376 99.99999% : 29642.437us 00:08:15.376 00:08:15.376 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:15.376 ================================================================================= 00:08:15.376 1.00000% : 9527.926us 00:08:15.376 10.00000% : 10586.585us 00:08:15.376 25.00000% : 11292.357us 00:08:15.376 50.00000% : 12300.603us 00:08:15.376 75.00000% : 13712.148us 00:08:15.376 90.00000% : 15123.692us 00:08:15.376 95.00000% : 16031.114us 00:08:15.376 98.00000% : 17039.360us 00:08:15.376 99.00000% : 22988.012us 00:08:15.376 99.50000% : 29642.437us 00:08:15.376 99.90000% : 30247.385us 00:08:15.376 99.99000% : 30449.034us 00:08:15.376 99.99900% : 30449.034us 00:08:15.376 99.99990% : 30449.034us 00:08:15.376 99.99999% : 
30449.034us 00:08:15.376 00:08:15.376 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:15.376 ================================================================================= 00:08:15.376 1.00000% : 9527.926us 00:08:15.376 10.00000% : 10636.997us 00:08:15.376 25.00000% : 11241.945us 00:08:15.376 50.00000% : 12300.603us 00:08:15.376 75.00000% : 13712.148us 00:08:15.376 90.00000% : 15022.868us 00:08:15.376 95.00000% : 15627.815us 00:08:15.376 98.00000% : 16736.886us 00:08:15.376 99.00000% : 22685.538us 00:08:15.376 99.50000% : 30045.735us 00:08:15.376 99.90000% : 30650.683us 00:08:15.376 99.99000% : 30852.332us 00:08:15.376 99.99900% : 30852.332us 00:08:15.376 99.99990% : 30852.332us 00:08:15.376 99.99999% : 30852.332us 00:08:15.376 00:08:15.376 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:15.376 ================================================================================= 00:08:15.376 1.00000% : 9124.628us 00:08:15.376 10.00000% : 10636.997us 00:08:15.376 25.00000% : 11241.945us 00:08:15.376 50.00000% : 12300.603us 00:08:15.376 75.00000% : 13611.323us 00:08:15.376 90.00000% : 15022.868us 00:08:15.376 95.00000% : 15627.815us 00:08:15.639 98.00000% : 16636.062us 00:08:15.639 99.00000% : 23088.837us 00:08:15.639 99.50000% : 30247.385us 00:08:15.639 99.90000% : 30852.332us 00:08:15.639 99.99000% : 31053.982us 00:08:15.639 99.99900% : 31053.982us 00:08:15.639 99.99990% : 31053.982us 00:08:15.639 99.99999% : 31053.982us 00:08:15.639 00:08:15.639 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:15.639 ================================================================================= 00:08:15.639 1.00000% : 8670.917us 00:08:15.639 10.00000% : 10636.997us 00:08:15.639 25.00000% : 11241.945us 00:08:15.639 50.00000% : 12300.603us 00:08:15.639 75.00000% : 13712.148us 00:08:15.639 90.00000% : 14821.218us 00:08:15.639 95.00000% : 15728.640us 00:08:15.639 98.00000% : 16837.711us 00:08:15.639 99.00000% : 17442.658us 00:08:15.639 99.50000% : 22786.363us 00:08:15.639 99.90000% : 23492.135us 00:08:15.639 99.99000% : 23592.960us 00:08:15.639 99.99900% : 23592.960us 00:08:15.639 99.99990% : 23592.960us 00:08:15.639 99.99999% : 23592.960us 00:08:15.639 00:08:15.639 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:15.639 ============================================================================== 00:08:15.639 Range in us Cumulative IO count 00:08:15.639 7813.908 - 7864.320: 0.0099% ( 1) 00:08:15.639 7864.320 - 7914.732: 0.0791% ( 7) 00:08:15.639 7914.732 - 7965.145: 0.1681% ( 9) 00:08:15.639 7965.145 - 8015.557: 0.2967% ( 13) 00:08:15.639 8015.557 - 8065.969: 0.3560% ( 6) 00:08:15.639 8065.969 - 8116.382: 0.3857% ( 3) 00:08:15.639 8116.382 - 8166.794: 0.4055% ( 2) 00:08:15.639 8166.794 - 8217.206: 0.4252% ( 2) 00:08:15.639 8217.206 - 8267.618: 0.5044% ( 8) 00:08:15.639 8267.618 - 8318.031: 0.5439% ( 4) 00:08:15.639 8318.031 - 8368.443: 0.5538% ( 1) 00:08:15.639 8368.443 - 8418.855: 0.5736% ( 2) 00:08:15.639 8519.680 - 8570.092: 0.6230% ( 5) 00:08:15.639 8570.092 - 8620.505: 0.6329% ( 1) 00:08:15.639 9275.865 - 9326.277: 0.6626% ( 3) 00:08:15.639 9326.277 - 9376.689: 0.6824% ( 2) 00:08:15.639 9376.689 - 9427.102: 0.7714% ( 9) 00:08:15.639 9427.102 - 9477.514: 0.8999% ( 13) 00:08:15.639 9477.514 - 9527.926: 1.0483% ( 15) 00:08:15.639 9527.926 - 9578.338: 1.1867% ( 14) 00:08:15.639 9578.338 - 9628.751: 1.3845% ( 20) 00:08:15.639 9628.751 - 9679.163: 1.5526% ( 17) 00:08:15.639 9679.163 - 9729.575: 1.8888% ( 34) 00:08:15.639 
9729.575 - 9779.988: 2.0570% ( 17) 00:08:15.639 9779.988 - 9830.400: 2.2646% ( 21) 00:08:15.639 9830.400 - 9880.812: 2.4723% ( 21) 00:08:15.639 9880.812 - 9931.225: 2.7888% ( 32) 00:08:15.639 9931.225 - 9981.637: 3.0756% ( 29) 00:08:15.639 9981.637 - 10032.049: 3.3129% ( 24) 00:08:15.639 10032.049 - 10082.462: 3.9557% ( 65) 00:08:15.639 10082.462 - 10132.874: 4.4897% ( 54) 00:08:15.639 10132.874 - 10183.286: 4.9743% ( 49) 00:08:15.639 10183.286 - 10233.698: 5.5874% ( 62) 00:08:15.639 10233.698 - 10284.111: 6.0028% ( 42) 00:08:15.639 10284.111 - 10334.523: 6.5269% ( 53) 00:08:15.639 10334.523 - 10384.935: 6.9521% ( 43) 00:08:15.639 10384.935 - 10435.348: 7.4466% ( 50) 00:08:15.639 10435.348 - 10485.760: 8.5344% ( 110) 00:08:15.639 10485.760 - 10536.172: 9.6222% ( 110) 00:08:15.639 10536.172 - 10586.585: 10.8881% ( 128) 00:08:15.639 10586.585 - 10636.997: 12.0945% ( 122) 00:08:15.639 10636.997 - 10687.409: 13.1428% ( 106) 00:08:15.639 10687.409 - 10737.822: 14.1416% ( 101) 00:08:15.639 10737.822 - 10788.234: 15.1009% ( 97) 00:08:15.639 10788.234 - 10838.646: 15.8821% ( 79) 00:08:15.640 10838.646 - 10889.058: 16.6040% ( 73) 00:08:15.640 10889.058 - 10939.471: 17.4545% ( 86) 00:08:15.640 10939.471 - 10989.883: 18.1962% ( 75) 00:08:15.640 10989.883 - 11040.295: 19.2346% ( 105) 00:08:15.640 11040.295 - 11090.708: 20.3817% ( 116) 00:08:15.640 11090.708 - 11141.120: 21.6475% ( 128) 00:08:15.640 11141.120 - 11191.532: 22.9529% ( 132) 00:08:15.640 11191.532 - 11241.945: 23.9320% ( 99) 00:08:15.640 11241.945 - 11292.357: 24.8121% ( 89) 00:08:15.640 11292.357 - 11342.769: 25.9494% ( 115) 00:08:15.640 11342.769 - 11393.182: 27.0174% ( 108) 00:08:15.640 11393.182 - 11443.594: 28.1646% ( 116) 00:08:15.640 11443.594 - 11494.006: 29.4106% ( 126) 00:08:15.640 11494.006 - 11544.418: 30.3896% ( 99) 00:08:15.640 11544.418 - 11594.831: 31.7049% ( 133) 00:08:15.640 11594.831 - 11645.243: 33.0103% ( 132) 00:08:15.640 11645.243 - 11695.655: 34.5827% ( 159) 00:08:15.640 11695.655 - 11746.068: 36.1650% ( 160) 00:08:15.640 11746.068 - 11796.480: 37.7275% ( 158) 00:08:15.640 11796.480 - 11846.892: 39.1317% ( 142) 00:08:15.640 11846.892 - 11897.305: 40.4272% ( 131) 00:08:15.640 11897.305 - 11947.717: 41.3271% ( 91) 00:08:15.640 11947.717 - 11998.129: 42.5732% ( 126) 00:08:15.640 11998.129 - 12048.542: 43.7698% ( 121) 00:08:15.640 12048.542 - 12098.954: 45.3026% ( 155) 00:08:15.640 12098.954 - 12149.366: 46.9244% ( 164) 00:08:15.640 12149.366 - 12199.778: 48.3188% ( 141) 00:08:15.640 12199.778 - 12250.191: 49.5945% ( 129) 00:08:15.640 12250.191 - 12300.603: 50.6626% ( 108) 00:08:15.640 12300.603 - 12351.015: 51.7405% ( 109) 00:08:15.640 12351.015 - 12401.428: 52.9767% ( 125) 00:08:15.640 12401.428 - 12451.840: 53.7975% ( 83) 00:08:15.640 12451.840 - 12502.252: 55.0831% ( 130) 00:08:15.640 12502.252 - 12552.665: 56.0522% ( 98) 00:08:15.640 12552.665 - 12603.077: 57.2191% ( 118) 00:08:15.640 12603.077 - 12653.489: 58.2575% ( 105) 00:08:15.640 12653.489 - 12703.902: 59.0684% ( 82) 00:08:15.640 12703.902 - 12754.314: 60.0475% ( 99) 00:08:15.640 12754.314 - 12804.726: 61.0166% ( 98) 00:08:15.640 12804.726 - 12855.138: 61.7979% ( 79) 00:08:15.640 12855.138 - 12905.551: 62.5198% ( 73) 00:08:15.640 12905.551 - 13006.375: 64.4482% ( 195) 00:08:15.640 13006.375 - 13107.200: 66.2184% ( 179) 00:08:15.640 13107.200 - 13208.025: 67.6622% ( 146) 00:08:15.640 13208.025 - 13308.849: 69.2247% ( 158) 00:08:15.640 13308.849 - 13409.674: 70.9751% ( 177) 00:08:15.640 13409.674 - 13510.498: 72.6068% ( 165) 00:08:15.640 13510.498 - 13611.323: 
74.0803% ( 149) 00:08:15.640 13611.323 - 13712.148: 75.8307% ( 177) 00:08:15.640 13712.148 - 13812.972: 77.6800% ( 187) 00:08:15.640 13812.972 - 13913.797: 79.1831% ( 152) 00:08:15.640 13913.797 - 14014.622: 80.4094% ( 124) 00:08:15.640 14014.622 - 14115.446: 81.8137% ( 142) 00:08:15.640 14115.446 - 14216.271: 83.2773% ( 148) 00:08:15.640 14216.271 - 14317.095: 84.5629% ( 130) 00:08:15.640 14317.095 - 14417.920: 85.8584% ( 131) 00:08:15.640 14417.920 - 14518.745: 87.1143% ( 127) 00:08:15.640 14518.745 - 14619.569: 88.3109% ( 121) 00:08:15.640 14619.569 - 14720.394: 89.2108% ( 91) 00:08:15.640 14720.394 - 14821.218: 90.0415% ( 84) 00:08:15.640 14821.218 - 14922.043: 90.8030% ( 77) 00:08:15.640 14922.043 - 15022.868: 91.5150% ( 72) 00:08:15.640 15022.868 - 15123.692: 92.2666% ( 76) 00:08:15.640 15123.692 - 15224.517: 92.8402% ( 58) 00:08:15.640 15224.517 - 15325.342: 93.2753% ( 44) 00:08:15.640 15325.342 - 15426.166: 93.7203% ( 45) 00:08:15.640 15426.166 - 15526.991: 94.0566% ( 34) 00:08:15.640 15526.991 - 15627.815: 94.4324% ( 38) 00:08:15.640 15627.815 - 15728.640: 94.7983% ( 37) 00:08:15.640 15728.640 - 15829.465: 95.0850% ( 29) 00:08:15.640 15829.465 - 15930.289: 95.3619% ( 28) 00:08:15.640 15930.289 - 16031.114: 95.5301% ( 17) 00:08:15.640 16031.114 - 16131.938: 95.7674% ( 24) 00:08:15.640 16131.938 - 16232.763: 95.9454% ( 18) 00:08:15.640 16232.763 - 16333.588: 96.1432% ( 20) 00:08:15.640 16333.588 - 16434.412: 96.6475% ( 51) 00:08:15.640 16434.412 - 16535.237: 96.9244% ( 28) 00:08:15.640 16535.237 - 16636.062: 97.1321% ( 21) 00:08:15.640 16636.062 - 16736.886: 97.3299% ( 20) 00:08:15.640 16736.886 - 16837.711: 97.4288% ( 10) 00:08:15.640 16837.711 - 16938.535: 97.4782% ( 5) 00:08:15.640 16938.535 - 17039.360: 97.6167% ( 14) 00:08:15.640 17039.360 - 17140.185: 97.7057% ( 9) 00:08:15.640 17140.185 - 17241.009: 97.8244% ( 12) 00:08:15.640 17241.009 - 17341.834: 98.0914% ( 27) 00:08:15.640 17341.834 - 17442.658: 98.2199% ( 13) 00:08:15.640 17442.658 - 17543.483: 98.3386% ( 12) 00:08:15.640 17543.483 - 17644.308: 98.4573% ( 12) 00:08:15.640 17644.308 - 17745.132: 98.5067% ( 5) 00:08:15.640 17745.132 - 17845.957: 98.5463% ( 4) 00:08:15.640 17845.957 - 17946.782: 98.5858% ( 4) 00:08:15.640 17946.782 - 18047.606: 98.6946% ( 11) 00:08:15.640 18047.606 - 18148.431: 98.7342% ( 4) 00:08:15.640 22181.415 - 22282.240: 98.7441% ( 1) 00:08:15.640 22282.240 - 22383.065: 98.8034% ( 6) 00:08:15.640 22383.065 - 22483.889: 98.8627% ( 6) 00:08:15.640 22483.889 - 22584.714: 98.9221% ( 6) 00:08:15.640 22584.714 - 22685.538: 98.9814% ( 6) 00:08:15.640 22685.538 - 22786.363: 99.0407% ( 6) 00:08:15.640 22786.363 - 22887.188: 99.0902% ( 5) 00:08:15.640 22887.188 - 22988.012: 99.1495% ( 6) 00:08:15.640 22988.012 - 23088.837: 99.1891% ( 4) 00:08:15.640 23088.837 - 23189.662: 99.2484% ( 6) 00:08:15.640 23189.662 - 23290.486: 99.3078% ( 6) 00:08:15.640 23290.486 - 23391.311: 99.3572% ( 5) 00:08:15.640 23391.311 - 23492.135: 99.3671% ( 1) 00:08:15.640 29037.489 - 29239.138: 99.4066% ( 4) 00:08:15.640 29239.138 - 29440.788: 99.5055% ( 10) 00:08:15.640 29440.788 - 29642.437: 99.5550% ( 5) 00:08:15.640 29642.437 - 29844.086: 99.6143% ( 6) 00:08:15.640 29844.086 - 30045.735: 99.7330% ( 12) 00:08:15.640 30045.735 - 30247.385: 99.8418% ( 11) 00:08:15.640 30247.385 - 30449.034: 99.9604% ( 12) 00:08:15.640 30449.034 - 30650.683: 100.0000% ( 4) 00:08:15.640 00:08:15.640 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:15.640 ============================================================================== 
00:08:15.640 Range in us Cumulative IO count 00:08:15.640 7108.135 - 7158.548: 0.0099% ( 1) 00:08:15.640 7158.548 - 7208.960: 0.0494% ( 4) 00:08:15.640 7208.960 - 7259.372: 0.0791% ( 3) 00:08:15.640 7259.372 - 7309.785: 0.1483% ( 7) 00:08:15.640 7309.785 - 7360.197: 0.2670% ( 12) 00:08:15.640 7360.197 - 7410.609: 0.3758% ( 11) 00:08:15.640 7410.609 - 7461.022: 0.4351% ( 6) 00:08:15.640 7461.022 - 7511.434: 0.4648% ( 3) 00:08:15.640 7511.434 - 7561.846: 0.5044% ( 4) 00:08:15.640 7561.846 - 7612.258: 0.5439% ( 4) 00:08:15.640 7612.258 - 7662.671: 0.5835% ( 4) 00:08:15.640 7662.671 - 7713.083: 0.6230% ( 4) 00:08:15.640 7713.083 - 7763.495: 0.6329% ( 1) 00:08:15.640 9326.277 - 9376.689: 0.6527% ( 2) 00:08:15.640 9376.689 - 9427.102: 0.7120% ( 6) 00:08:15.640 9427.102 - 9477.514: 0.8010% ( 9) 00:08:15.640 9477.514 - 9527.926: 0.9691% ( 17) 00:08:15.640 9527.926 - 9578.338: 1.1076% ( 14) 00:08:15.640 9578.338 - 9628.751: 1.3449% ( 24) 00:08:15.640 9628.751 - 9679.163: 1.5724% ( 23) 00:08:15.640 9679.163 - 9729.575: 1.8592% ( 29) 00:08:15.640 9729.575 - 9779.988: 2.3438% ( 49) 00:08:15.640 9779.988 - 9830.400: 2.6305% ( 29) 00:08:15.640 9830.400 - 9880.812: 2.8481% ( 22) 00:08:15.640 9880.812 - 9931.225: 3.0756% ( 23) 00:08:15.640 9931.225 - 9981.637: 3.4118% ( 34) 00:08:15.640 9981.637 - 10032.049: 3.7381% ( 33) 00:08:15.640 10032.049 - 10082.462: 4.0447% ( 31) 00:08:15.640 10082.462 - 10132.874: 4.3908% ( 35) 00:08:15.640 10132.874 - 10183.286: 4.7963% ( 41) 00:08:15.640 10183.286 - 10233.698: 5.2116% ( 42) 00:08:15.640 10233.698 - 10284.111: 5.6863% ( 48) 00:08:15.640 10284.111 - 10334.523: 6.0819% ( 40) 00:08:15.640 10334.523 - 10384.935: 6.5862% ( 51) 00:08:15.640 10384.935 - 10435.348: 7.3675% ( 79) 00:08:15.640 10435.348 - 10485.760: 8.0795% ( 72) 00:08:15.640 10485.760 - 10536.172: 8.7619% ( 69) 00:08:15.640 10536.172 - 10586.585: 9.5926% ( 84) 00:08:15.640 10586.585 - 10636.997: 10.6507% ( 107) 00:08:15.640 10636.997 - 10687.409: 11.8968% ( 126) 00:08:15.640 10687.409 - 10737.822: 13.2516% ( 137) 00:08:15.640 10737.822 - 10788.234: 14.2306% ( 99) 00:08:15.640 10788.234 - 10838.646: 15.1701% ( 95) 00:08:15.640 10838.646 - 10889.058: 16.1887% ( 103) 00:08:15.640 10889.058 - 10939.471: 17.3062% ( 113) 00:08:15.640 10939.471 - 10989.883: 18.5522% ( 126) 00:08:15.640 10989.883 - 11040.295: 19.6301% ( 109) 00:08:15.640 11040.295 - 11090.708: 20.6388% ( 102) 00:08:15.640 11090.708 - 11141.120: 21.5783% ( 95) 00:08:15.640 11141.120 - 11191.532: 22.8145% ( 125) 00:08:15.640 11191.532 - 11241.945: 24.1396% ( 134) 00:08:15.640 11241.945 - 11292.357: 25.5439% ( 142) 00:08:15.640 11292.357 - 11342.769: 26.7603% ( 123) 00:08:15.640 11342.769 - 11393.182: 28.1942% ( 145) 00:08:15.640 11393.182 - 11443.594: 29.5688% ( 139) 00:08:15.640 11443.594 - 11494.006: 30.7061% ( 115) 00:08:15.640 11494.006 - 11544.418: 31.9917% ( 130) 00:08:15.640 11544.418 - 11594.831: 33.2278% ( 125) 00:08:15.640 11594.831 - 11645.243: 34.4838% ( 127) 00:08:15.640 11645.243 - 11695.655: 35.9474% ( 148) 00:08:15.640 11695.655 - 11746.068: 37.1934% ( 126) 00:08:15.640 11746.068 - 11796.480: 38.7460% ( 157) 00:08:15.640 11796.480 - 11846.892: 40.1404% ( 141) 00:08:15.640 11846.892 - 11897.305: 41.7326% ( 161) 00:08:15.640 11897.305 - 11947.717: 43.2852% ( 157) 00:08:15.640 11947.717 - 11998.129: 44.7093% ( 144) 00:08:15.640 11998.129 - 12048.542: 45.8861% ( 119) 00:08:15.640 12048.542 - 12098.954: 46.9047% ( 103) 00:08:15.640 12098.954 - 12149.366: 47.9628% ( 107) 00:08:15.640 12149.366 - 12199.778: 49.1792% ( 123) 00:08:15.641 
12199.778 - 12250.191: 50.1879% ( 102) 00:08:15.641 12250.191 - 12300.603: 51.0779% ( 90) 00:08:15.641 12300.603 - 12351.015: 52.2350% ( 117) 00:08:15.641 12351.015 - 12401.428: 53.2338% ( 101) 00:08:15.641 12401.428 - 12451.840: 54.2623% ( 104) 00:08:15.641 12451.840 - 12502.252: 55.2611% ( 101) 00:08:15.641 12502.252 - 12552.665: 55.9632% ( 71) 00:08:15.641 12552.665 - 12603.077: 56.6060% ( 65) 00:08:15.641 12603.077 - 12653.489: 57.3774% ( 78) 00:08:15.641 12653.489 - 12703.902: 58.2674% ( 90) 00:08:15.641 12703.902 - 12754.314: 59.2959% ( 104) 00:08:15.641 12754.314 - 12804.726: 60.0672% ( 78) 00:08:15.641 12804.726 - 12855.138: 60.8089% ( 75) 00:08:15.641 12855.138 - 12905.551: 61.6792% ( 88) 00:08:15.641 12905.551 - 13006.375: 63.4691% ( 181) 00:08:15.641 13006.375 - 13107.200: 65.2097% ( 176) 00:08:15.641 13107.200 - 13208.025: 66.7722% ( 158) 00:08:15.641 13208.025 - 13308.849: 68.3643% ( 161) 00:08:15.641 13308.849 - 13409.674: 69.8675% ( 152) 00:08:15.641 13409.674 - 13510.498: 71.2718% ( 142) 00:08:15.641 13510.498 - 13611.323: 72.9925% ( 174) 00:08:15.641 13611.323 - 13712.148: 74.6242% ( 165) 00:08:15.641 13712.148 - 13812.972: 76.2955% ( 169) 00:08:15.641 13812.972 - 13913.797: 78.1250% ( 185) 00:08:15.641 13913.797 - 14014.622: 79.8161% ( 171) 00:08:15.641 14014.622 - 14115.446: 81.3786% ( 158) 00:08:15.641 14115.446 - 14216.271: 82.7729% ( 141) 00:08:15.641 14216.271 - 14317.095: 84.1673% ( 141) 00:08:15.641 14317.095 - 14417.920: 85.7694% ( 162) 00:08:15.641 14417.920 - 14518.745: 87.0748% ( 132) 00:08:15.641 14518.745 - 14619.569: 88.1032% ( 104) 00:08:15.641 14619.569 - 14720.394: 88.8548% ( 76) 00:08:15.641 14720.394 - 14821.218: 89.5273% ( 68) 00:08:15.641 14821.218 - 14922.043: 90.2987% ( 78) 00:08:15.641 14922.043 - 15022.868: 90.9513% ( 66) 00:08:15.641 15022.868 - 15123.692: 91.5941% ( 65) 00:08:15.641 15123.692 - 15224.517: 92.3952% ( 81) 00:08:15.641 15224.517 - 15325.342: 93.0380% ( 65) 00:08:15.641 15325.342 - 15426.166: 93.6017% ( 57) 00:08:15.641 15426.166 - 15526.991: 94.3038% ( 71) 00:08:15.641 15526.991 - 15627.815: 94.7587% ( 46) 00:08:15.641 15627.815 - 15728.640: 95.1147% ( 36) 00:08:15.641 15728.640 - 15829.465: 95.3422% ( 23) 00:08:15.641 15829.465 - 15930.289: 95.4707% ( 13) 00:08:15.641 15930.289 - 16031.114: 95.6092% ( 14) 00:08:15.641 16031.114 - 16131.938: 95.7674% ( 16) 00:08:15.641 16131.938 - 16232.763: 95.9553% ( 19) 00:08:15.641 16232.763 - 16333.588: 96.1234% ( 17) 00:08:15.641 16333.588 - 16434.412: 96.2718% ( 15) 00:08:15.641 16434.412 - 16535.237: 96.5190% ( 25) 00:08:15.641 16535.237 - 16636.062: 96.7267% ( 21) 00:08:15.641 16636.062 - 16736.886: 96.9640% ( 24) 00:08:15.641 16736.886 - 16837.711: 97.2805% ( 32) 00:08:15.641 16837.711 - 16938.535: 97.5870% ( 31) 00:08:15.641 16938.535 - 17039.360: 97.8639% ( 28) 00:08:15.641 17039.360 - 17140.185: 98.1705% ( 31) 00:08:15.641 17140.185 - 17241.009: 98.4276% ( 26) 00:08:15.641 17241.009 - 17341.834: 98.6056% ( 18) 00:08:15.641 17341.834 - 17442.658: 98.7045% ( 10) 00:08:15.641 17442.658 - 17543.483: 98.7342% ( 3) 00:08:15.641 21778.117 - 21878.942: 98.7540% ( 2) 00:08:15.641 21878.942 - 21979.766: 98.8232% ( 7) 00:08:15.641 21979.766 - 22080.591: 98.8825% ( 6) 00:08:15.641 22080.591 - 22181.415: 98.9517% ( 7) 00:08:15.641 22181.415 - 22282.240: 99.0111% ( 6) 00:08:15.641 22282.240 - 22383.065: 99.0803% ( 7) 00:08:15.641 22383.065 - 22483.889: 99.1396% ( 6) 00:08:15.641 22483.889 - 22584.714: 99.2089% ( 7) 00:08:15.641 22584.714 - 22685.538: 99.2682% ( 6) 00:08:15.641 22685.538 - 22786.363: 
99.3275% ( 6) 00:08:15.641 22786.363 - 22887.188: 99.3671% ( 4) 00:08:15.641 28634.191 - 28835.840: 99.4759% ( 11) 00:08:15.641 28835.840 - 29037.489: 99.6044% ( 13) 00:08:15.641 29037.489 - 29239.138: 99.7330% ( 13) 00:08:15.641 29239.138 - 29440.788: 99.8714% ( 14) 00:08:15.641 29440.788 - 29642.437: 100.0000% ( 13) 00:08:15.641 00:08:15.641 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:15.641 ============================================================================== 00:08:15.641 Range in us Cumulative IO count 00:08:15.641 5923.446 - 5948.652: 0.0099% ( 1) 00:08:15.641 5948.652 - 5973.858: 0.0396% ( 3) 00:08:15.641 5973.858 - 5999.065: 0.1187% ( 8) 00:08:15.641 5999.065 - 6024.271: 0.2275% ( 11) 00:08:15.641 6024.271 - 6049.477: 0.2967% ( 7) 00:08:15.641 6049.477 - 6074.683: 0.3263% ( 3) 00:08:15.641 6074.683 - 6099.889: 0.3461% ( 2) 00:08:15.641 6099.889 - 6125.095: 0.3659% ( 2) 00:08:15.641 6125.095 - 6150.302: 0.3857% ( 2) 00:08:15.641 6150.302 - 6175.508: 0.4055% ( 2) 00:08:15.641 6175.508 - 6200.714: 0.4252% ( 2) 00:08:15.641 6200.714 - 6225.920: 0.4450% ( 2) 00:08:15.641 6225.920 - 6251.126: 0.4549% ( 1) 00:08:15.641 6251.126 - 6276.332: 0.4747% ( 2) 00:08:15.641 6276.332 - 6301.538: 0.4945% ( 2) 00:08:15.641 6301.538 - 6326.745: 0.5142% ( 2) 00:08:15.641 6326.745 - 6351.951: 0.5340% ( 2) 00:08:15.641 6351.951 - 6377.157: 0.5538% ( 2) 00:08:15.641 6377.157 - 6402.363: 0.5736% ( 2) 00:08:15.641 6402.363 - 6427.569: 0.5934% ( 2) 00:08:15.641 6427.569 - 6452.775: 0.6131% ( 2) 00:08:15.641 6452.775 - 6503.188: 0.6329% ( 2) 00:08:15.641 9124.628 - 9175.040: 0.6428% ( 1) 00:08:15.641 9175.040 - 9225.452: 0.6626% ( 2) 00:08:15.641 9225.452 - 9275.865: 0.7219% ( 6) 00:08:15.641 9275.865 - 9326.277: 0.7417% ( 2) 00:08:15.641 9326.277 - 9376.689: 0.7911% ( 5) 00:08:15.641 9376.689 - 9427.102: 0.8307% ( 4) 00:08:15.641 9427.102 - 9477.514: 0.9296% ( 10) 00:08:15.641 9477.514 - 9527.926: 1.0680% ( 14) 00:08:15.641 9527.926 - 9578.338: 1.1175% ( 5) 00:08:15.641 9578.338 - 9628.751: 1.2559% ( 14) 00:08:15.641 9628.751 - 9679.163: 1.4142% ( 16) 00:08:15.641 9679.163 - 9729.575: 1.5526% ( 14) 00:08:15.641 9729.575 - 9779.988: 1.7207% ( 17) 00:08:15.641 9779.988 - 9830.400: 2.0669% ( 35) 00:08:15.641 9830.400 - 9880.812: 2.2943% ( 23) 00:08:15.641 9880.812 - 9931.225: 2.6503% ( 36) 00:08:15.641 9931.225 - 9981.637: 3.0063% ( 36) 00:08:15.641 9981.637 - 10032.049: 3.3228% ( 32) 00:08:15.641 10032.049 - 10082.462: 3.6392% ( 32) 00:08:15.641 10082.462 - 10132.874: 3.9656% ( 33) 00:08:15.641 10132.874 - 10183.286: 4.3315% ( 37) 00:08:15.641 10183.286 - 10233.698: 4.8062% ( 48) 00:08:15.641 10233.698 - 10284.111: 5.2710% ( 47) 00:08:15.641 10284.111 - 10334.523: 5.8248% ( 56) 00:08:15.641 10334.523 - 10384.935: 6.5269% ( 71) 00:08:15.641 10384.935 - 10435.348: 7.3081% ( 79) 00:08:15.641 10435.348 - 10485.760: 8.2081% ( 91) 00:08:15.641 10485.760 - 10536.172: 9.1574% ( 96) 00:08:15.641 10536.172 - 10586.585: 10.1068% ( 96) 00:08:15.641 10586.585 - 10636.997: 11.0067% ( 91) 00:08:15.641 10636.997 - 10687.409: 12.0649% ( 107) 00:08:15.641 10687.409 - 10737.822: 12.9153% ( 86) 00:08:15.641 10737.822 - 10788.234: 14.0032% ( 110) 00:08:15.641 10788.234 - 10838.646: 15.1009% ( 111) 00:08:15.641 10838.646 - 10889.058: 16.2381% ( 115) 00:08:15.641 10889.058 - 10939.471: 17.6028% ( 138) 00:08:15.641 10939.471 - 10989.883: 18.8093% ( 122) 00:08:15.641 10989.883 - 11040.295: 19.8378% ( 104) 00:08:15.641 11040.295 - 11090.708: 21.1531% ( 133) 00:08:15.641 11090.708 - 11141.120: 22.3200% ( 
118) 00:08:15.641 11141.120 - 11191.532: 23.3683% ( 106) 00:08:15.641 11191.532 - 11241.945: 24.6341% ( 128) 00:08:15.641 11241.945 - 11292.357: 26.0186% ( 140) 00:08:15.641 11292.357 - 11342.769: 27.2646% ( 126) 00:08:15.641 11342.769 - 11393.182: 29.2820% ( 204) 00:08:15.641 11393.182 - 11443.594: 30.7061% ( 144) 00:08:15.641 11443.594 - 11494.006: 32.0115% ( 132) 00:08:15.641 11494.006 - 11544.418: 33.2971% ( 130) 00:08:15.641 11544.418 - 11594.831: 34.6123% ( 133) 00:08:15.641 11594.831 - 11645.243: 36.0562% ( 146) 00:08:15.641 11645.243 - 11695.655: 37.0748% ( 103) 00:08:15.641 11695.655 - 11746.068: 38.2911% ( 123) 00:08:15.641 11746.068 - 11796.480: 39.6954% ( 142) 00:08:15.641 11796.480 - 11846.892: 40.6942% ( 101) 00:08:15.641 11846.892 - 11897.305: 41.9007% ( 122) 00:08:15.641 11897.305 - 11947.717: 43.3643% ( 148) 00:08:15.641 11947.717 - 11998.129: 44.4324% ( 108) 00:08:15.641 11998.129 - 12048.542: 45.4312% ( 101) 00:08:15.641 12048.542 - 12098.954: 46.4992% ( 108) 00:08:15.641 12098.954 - 12149.366: 47.6958% ( 121) 00:08:15.641 12149.366 - 12199.778: 48.7935% ( 111) 00:08:15.641 12199.778 - 12250.191: 49.6638% ( 88) 00:08:15.641 12250.191 - 12300.603: 50.5044% ( 85) 00:08:15.641 12300.603 - 12351.015: 51.5427% ( 105) 00:08:15.641 12351.015 - 12401.428: 52.5020% ( 97) 00:08:15.641 12401.428 - 12451.840: 53.7579% ( 127) 00:08:15.641 12451.840 - 12502.252: 54.7963% ( 105) 00:08:15.641 12502.252 - 12552.665: 55.8050% ( 102) 00:08:15.641 12552.665 - 12603.077: 56.7939% ( 100) 00:08:15.641 12603.077 - 12653.489: 57.9411% ( 116) 00:08:15.641 12653.489 - 12703.902: 59.1080% ( 118) 00:08:15.641 12703.902 - 12754.314: 60.1266% ( 103) 00:08:15.641 12754.314 - 12804.726: 61.2540% ( 114) 00:08:15.641 12804.726 - 12855.138: 62.2132% ( 97) 00:08:15.641 12855.138 - 12905.551: 62.9747% ( 77) 00:08:15.641 12905.551 - 13006.375: 64.6460% ( 169) 00:08:15.641 13006.375 - 13107.200: 66.4656% ( 184) 00:08:15.641 13107.200 - 13208.025: 67.7907% ( 134) 00:08:15.641 13208.025 - 13308.849: 69.5016% ( 173) 00:08:15.641 13308.849 - 13409.674: 71.0146% ( 153) 00:08:15.641 13409.674 - 13510.498: 72.5969% ( 160) 00:08:15.641 13510.498 - 13611.323: 73.9320% ( 135) 00:08:15.641 13611.323 - 13712.148: 75.2967% ( 138) 00:08:15.642 13712.148 - 13812.972: 77.0273% ( 175) 00:08:15.642 13812.972 - 13913.797: 78.6096% ( 160) 00:08:15.642 13913.797 - 14014.622: 80.5182% ( 193) 00:08:15.642 14014.622 - 14115.446: 82.1005% ( 160) 00:08:15.642 14115.446 - 14216.271: 84.1080% ( 203) 00:08:15.642 14216.271 - 14317.095: 85.9078% ( 182) 00:08:15.642 14317.095 - 14417.920: 86.8275% ( 93) 00:08:15.642 14417.920 - 14518.745: 87.6088% ( 79) 00:08:15.642 14518.745 - 14619.569: 88.0835% ( 48) 00:08:15.642 14619.569 - 14720.394: 88.3505% ( 27) 00:08:15.642 14720.394 - 14821.218: 88.8252% ( 48) 00:08:15.642 14821.218 - 14922.043: 89.3888% ( 57) 00:08:15.642 14922.043 - 15022.868: 89.9624% ( 58) 00:08:15.642 15022.868 - 15123.692: 90.7536% ( 80) 00:08:15.642 15123.692 - 15224.517: 91.7722% ( 103) 00:08:15.642 15224.517 - 15325.342: 92.8105% ( 105) 00:08:15.642 15325.342 - 15426.166: 93.4830% ( 68) 00:08:15.642 15426.166 - 15526.991: 93.9379% ( 46) 00:08:15.642 15526.991 - 15627.815: 94.2642% ( 33) 00:08:15.642 15627.815 - 15728.640: 94.5906% ( 33) 00:08:15.642 15728.640 - 15829.465: 94.7785% ( 19) 00:08:15.642 15829.465 - 15930.289: 94.9862% ( 21) 00:08:15.642 15930.289 - 16031.114: 95.2532% ( 27) 00:08:15.642 16031.114 - 16131.938: 95.5202% ( 27) 00:08:15.642 16131.938 - 16232.763: 95.8267% ( 31) 00:08:15.642 16232.763 - 16333.588: 
96.2619% ( 44) 00:08:15.642 16333.588 - 16434.412: 96.4992% ( 24) 00:08:15.642 16434.412 - 16535.237: 96.8651% ( 37) 00:08:15.642 16535.237 - 16636.062: 97.1816% ( 32) 00:08:15.642 16636.062 - 16736.886: 97.5672% ( 39) 00:08:15.642 16736.886 - 16837.711: 97.8244% ( 26) 00:08:15.642 16837.711 - 16938.535: 97.9331% ( 11) 00:08:15.642 16938.535 - 17039.360: 98.0024% ( 7) 00:08:15.642 17039.360 - 17140.185: 98.0617% ( 6) 00:08:15.642 17140.185 - 17241.009: 98.0914% ( 3) 00:08:15.642 17241.009 - 17341.834: 98.1013% ( 1) 00:08:15.642 17543.483 - 17644.308: 98.1309% ( 3) 00:08:15.642 17644.308 - 17745.132: 98.1903% ( 6) 00:08:15.642 17745.132 - 17845.957: 98.2595% ( 7) 00:08:15.642 17845.957 - 17946.782: 98.3287% ( 7) 00:08:15.642 17946.782 - 18047.606: 98.3979% ( 7) 00:08:15.642 18047.606 - 18148.431: 98.4474% ( 5) 00:08:15.642 18148.431 - 18249.255: 98.5067% ( 6) 00:08:15.642 18249.255 - 18350.080: 98.5759% ( 7) 00:08:15.642 18350.080 - 18450.905: 98.6254% ( 5) 00:08:15.642 18450.905 - 18551.729: 98.6650% ( 4) 00:08:15.642 18551.729 - 18652.554: 98.7144% ( 5) 00:08:15.642 18652.554 - 18753.378: 98.7342% ( 2) 00:08:15.642 22483.889 - 22584.714: 98.7935% ( 6) 00:08:15.642 22584.714 - 22685.538: 98.8430% ( 5) 00:08:15.642 22685.538 - 22786.363: 98.9023% ( 6) 00:08:15.642 22786.363 - 22887.188: 98.9616% ( 6) 00:08:15.642 22887.188 - 22988.012: 99.0210% ( 6) 00:08:15.642 22988.012 - 23088.837: 99.0803% ( 6) 00:08:15.642 23088.837 - 23189.662: 99.1297% ( 5) 00:08:15.642 23189.662 - 23290.486: 99.1990% ( 7) 00:08:15.642 23290.486 - 23391.311: 99.2484% ( 5) 00:08:15.642 23391.311 - 23492.135: 99.3078% ( 6) 00:08:15.642 23492.135 - 23592.960: 99.3671% ( 6) 00:08:15.642 29239.138 - 29440.788: 99.3770% ( 1) 00:08:15.642 29440.788 - 29642.437: 99.5055% ( 13) 00:08:15.642 29642.437 - 29844.086: 99.6440% ( 14) 00:08:15.642 29844.086 - 30045.735: 99.7725% ( 13) 00:08:15.642 30045.735 - 30247.385: 99.9110% ( 14) 00:08:15.642 30247.385 - 30449.034: 100.0000% ( 9) 00:08:15.642 00:08:15.642 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:15.642 ============================================================================== 00:08:15.642 Range in us Cumulative IO count 00:08:15.642 5242.880 - 5268.086: 0.0099% ( 1) 00:08:15.642 5268.086 - 5293.292: 0.0593% ( 5) 00:08:15.642 5318.498 - 5343.705: 0.1286% ( 7) 00:08:15.642 5343.705 - 5368.911: 0.1582% ( 3) 00:08:15.642 5368.911 - 5394.117: 0.2373% ( 8) 00:08:15.642 5394.117 - 5419.323: 0.3066% ( 7) 00:08:15.642 5419.323 - 5444.529: 0.3956% ( 9) 00:08:15.642 5444.529 - 5469.735: 0.4153% ( 2) 00:08:15.642 5469.735 - 5494.942: 0.4351% ( 2) 00:08:15.642 5494.942 - 5520.148: 0.4549% ( 2) 00:08:15.642 5520.148 - 5545.354: 0.4747% ( 2) 00:08:15.642 5545.354 - 5570.560: 0.4945% ( 2) 00:08:15.642 5570.560 - 5595.766: 0.5142% ( 2) 00:08:15.642 5595.766 - 5620.972: 0.5340% ( 2) 00:08:15.642 5620.972 - 5646.178: 0.5439% ( 1) 00:08:15.642 5646.178 - 5671.385: 0.5637% ( 2) 00:08:15.642 5671.385 - 5696.591: 0.5835% ( 2) 00:08:15.642 5696.591 - 5721.797: 0.6032% ( 2) 00:08:15.642 5721.797 - 5747.003: 0.6131% ( 1) 00:08:15.642 5747.003 - 5772.209: 0.6329% ( 2) 00:08:15.642 9326.277 - 9376.689: 0.6527% ( 2) 00:08:15.642 9376.689 - 9427.102: 0.7219% ( 7) 00:08:15.642 9427.102 - 9477.514: 0.8505% ( 13) 00:08:15.642 9477.514 - 9527.926: 1.0384% ( 19) 00:08:15.642 9527.926 - 9578.338: 1.1472% ( 11) 00:08:15.642 9578.338 - 9628.751: 1.2856% ( 14) 00:08:15.642 9628.751 - 9679.163: 1.4339% ( 15) 00:08:15.642 9679.163 - 9729.575: 1.6416% ( 21) 00:08:15.642 9729.575 - 
9779.988: 1.8394% ( 20) 00:08:15.642 9779.988 - 9830.400: 2.1262% ( 29) 00:08:15.642 9830.400 - 9880.812: 2.5020% ( 38) 00:08:15.642 9880.812 - 9931.225: 2.6998% ( 20) 00:08:15.642 9931.225 - 9981.637: 2.8877% ( 19) 00:08:15.642 9981.637 - 10032.049: 3.1151% ( 23) 00:08:15.642 10032.049 - 10082.462: 3.3228% ( 21) 00:08:15.642 10082.462 - 10132.874: 3.5206% ( 20) 00:08:15.642 10132.874 - 10183.286: 3.7777% ( 26) 00:08:15.642 10183.286 - 10233.698: 4.0348% ( 26) 00:08:15.642 10233.698 - 10284.111: 4.4007% ( 37) 00:08:15.642 10284.111 - 10334.523: 4.8952% ( 50) 00:08:15.642 10334.523 - 10384.935: 5.3995% ( 51) 00:08:15.642 10384.935 - 10435.348: 6.0918% ( 70) 00:08:15.642 10435.348 - 10485.760: 7.1697% ( 109) 00:08:15.642 10485.760 - 10536.172: 8.0400% ( 88) 00:08:15.642 10536.172 - 10586.585: 8.9695% ( 94) 00:08:15.642 10586.585 - 10636.997: 10.0771% ( 112) 00:08:15.642 10636.997 - 10687.409: 11.2737% ( 121) 00:08:15.642 10687.409 - 10737.822: 12.5989% ( 134) 00:08:15.642 10737.822 - 10788.234: 13.5878% ( 100) 00:08:15.642 10788.234 - 10838.646: 14.7844% ( 121) 00:08:15.642 10838.646 - 10889.058: 16.0601% ( 129) 00:08:15.642 10889.058 - 10939.471: 17.3062% ( 126) 00:08:15.642 10939.471 - 10989.883: 18.7401% ( 145) 00:08:15.642 10989.883 - 11040.295: 20.1543% ( 143) 00:08:15.642 11040.295 - 11090.708: 21.7761% ( 164) 00:08:15.642 11090.708 - 11141.120: 23.3979% ( 164) 00:08:15.642 11141.120 - 11191.532: 24.8616% ( 148) 00:08:15.642 11191.532 - 11241.945: 26.2263% ( 138) 00:08:15.642 11241.945 - 11292.357: 27.6305% ( 142) 00:08:15.642 11292.357 - 11342.769: 29.0941% ( 148) 00:08:15.642 11342.769 - 11393.182: 30.5083% ( 143) 00:08:15.642 11393.182 - 11443.594: 31.6851% ( 119) 00:08:15.642 11443.594 - 11494.006: 32.6642% ( 99) 00:08:15.642 11494.006 - 11544.418: 33.5542% ( 90) 00:08:15.642 11544.418 - 11594.831: 34.4146% ( 87) 00:08:15.642 11594.831 - 11645.243: 35.5419% ( 114) 00:08:15.642 11645.243 - 11695.655: 36.5309% ( 100) 00:08:15.642 11695.655 - 11746.068: 37.4802% ( 96) 00:08:15.642 11746.068 - 11796.480: 38.3801% ( 91) 00:08:15.642 11796.480 - 11846.892: 39.2207% ( 85) 00:08:15.642 11846.892 - 11897.305: 40.3085% ( 110) 00:08:15.642 11897.305 - 11947.717: 41.4953% ( 120) 00:08:15.642 11947.717 - 11998.129: 42.9688% ( 149) 00:08:15.642 11998.129 - 12048.542: 44.0862% ( 113) 00:08:15.642 12048.542 - 12098.954: 45.5004% ( 143) 00:08:15.642 12098.954 - 12149.366: 46.9244% ( 144) 00:08:15.642 12149.366 - 12199.778: 47.9925% ( 108) 00:08:15.642 12199.778 - 12250.191: 49.1792% ( 120) 00:08:15.642 12250.191 - 12300.603: 50.3066% ( 114) 00:08:15.642 12300.603 - 12351.015: 51.6021% ( 131) 00:08:15.642 12351.015 - 12401.428: 52.8481% ( 126) 00:08:15.642 12401.428 - 12451.840: 54.2227% ( 139) 00:08:15.642 12451.840 - 12502.252: 55.5281% ( 132) 00:08:15.642 12502.252 - 12552.665: 56.7741% ( 126) 00:08:15.642 12552.665 - 12603.077: 57.9608% ( 120) 00:08:15.642 12603.077 - 12653.489: 58.9399% ( 99) 00:08:15.642 12653.489 - 12703.902: 59.9782% ( 105) 00:08:15.642 12703.902 - 12754.314: 60.9078% ( 94) 00:08:15.642 12754.314 - 12804.726: 62.0847% ( 119) 00:08:15.642 12804.726 - 12855.138: 63.5186% ( 145) 00:08:15.642 12855.138 - 12905.551: 64.6756% ( 117) 00:08:15.642 12905.551 - 13006.375: 66.6337% ( 198) 00:08:15.642 13006.375 - 13107.200: 67.8105% ( 119) 00:08:15.642 13107.200 - 13208.025: 68.8489% ( 105) 00:08:15.642 13208.025 - 13308.849: 69.7587% ( 92) 00:08:15.642 13308.849 - 13409.674: 70.8465% ( 110) 00:08:15.642 13409.674 - 13510.498: 72.0629% ( 123) 00:08:15.642 13510.498 - 13611.323: 
73.7243% ( 168) 00:08:15.642 13611.323 - 13712.148: 75.1088% ( 140) 00:08:15.642 13712.148 - 13812.972: 76.5032% ( 141) 00:08:15.642 13812.972 - 13913.797: 78.0162% ( 153) 00:08:15.642 13913.797 - 14014.622: 79.3513% ( 135) 00:08:15.642 14014.622 - 14115.446: 80.7456% ( 141) 00:08:15.642 14115.446 - 14216.271: 81.9027% ( 117) 00:08:15.642 14216.271 - 14317.095: 83.4157% ( 153) 00:08:15.642 14317.095 - 14417.920: 84.6123% ( 121) 00:08:15.642 14417.920 - 14518.745: 86.0265% ( 143) 00:08:15.642 14518.745 - 14619.569: 87.2132% ( 120) 00:08:15.642 14619.569 - 14720.394: 88.2219% ( 102) 00:08:15.642 14720.394 - 14821.218: 88.8647% ( 65) 00:08:15.642 14821.218 - 14922.043: 89.6064% ( 75) 00:08:15.642 14922.043 - 15022.868: 90.3184% ( 72) 00:08:15.642 15022.868 - 15123.692: 91.1590% ( 85) 00:08:15.642 15123.692 - 15224.517: 91.9007% ( 75) 00:08:15.642 15224.517 - 15325.342: 92.7215% ( 83) 00:08:15.642 15325.342 - 15426.166: 93.6412% ( 93) 00:08:15.642 15426.166 - 15526.991: 94.5214% ( 89) 00:08:15.642 15526.991 - 15627.815: 95.0850% ( 57) 00:08:15.642 15627.815 - 15728.640: 95.5400% ( 46) 00:08:15.642 15728.640 - 15829.465: 95.8663% ( 33) 00:08:15.643 15829.465 - 15930.289: 96.1729% ( 31) 00:08:15.643 15930.289 - 16031.114: 96.4893% ( 32) 00:08:15.643 16031.114 - 16131.938: 96.7168% ( 23) 00:08:15.643 16131.938 - 16232.763: 96.9838% ( 27) 00:08:15.643 16232.763 - 16333.588: 97.2706% ( 29) 00:08:15.643 16333.588 - 16434.412: 97.6464% ( 38) 00:08:15.643 16434.412 - 16535.237: 97.8343% ( 19) 00:08:15.643 16535.237 - 16636.062: 97.9529% ( 12) 00:08:15.643 16636.062 - 16736.886: 98.0123% ( 6) 00:08:15.643 16736.886 - 16837.711: 98.0716% ( 6) 00:08:15.643 16837.711 - 16938.535: 98.1013% ( 3) 00:08:15.643 16938.535 - 17039.360: 98.1408% ( 4) 00:08:15.643 17039.360 - 17140.185: 98.2002% ( 6) 00:08:15.643 17140.185 - 17241.009: 98.2595% ( 6) 00:08:15.643 17241.009 - 17341.834: 98.2991% ( 4) 00:08:15.643 17341.834 - 17442.658: 98.3782% ( 8) 00:08:15.643 17442.658 - 17543.483: 98.4276% ( 5) 00:08:15.643 17543.483 - 17644.308: 98.5166% ( 9) 00:08:15.643 17644.308 - 17745.132: 98.6155% ( 10) 00:08:15.643 17745.132 - 17845.957: 98.6650% ( 5) 00:08:15.643 17845.957 - 17946.782: 98.7045% ( 4) 00:08:15.643 17946.782 - 18047.606: 98.7342% ( 3) 00:08:15.643 22181.415 - 22282.240: 98.8034% ( 7) 00:08:15.643 22282.240 - 22383.065: 98.8627% ( 6) 00:08:15.643 22383.065 - 22483.889: 98.9320% ( 7) 00:08:15.643 22483.889 - 22584.714: 98.9814% ( 5) 00:08:15.643 22584.714 - 22685.538: 99.0506% ( 7) 00:08:15.643 22685.538 - 22786.363: 99.1100% ( 6) 00:08:15.643 22786.363 - 22887.188: 99.1792% ( 7) 00:08:15.643 22887.188 - 22988.012: 99.2484% ( 7) 00:08:15.643 22988.012 - 23088.837: 99.3176% ( 7) 00:08:15.643 23088.837 - 23189.662: 99.3671% ( 5) 00:08:15.643 29642.437 - 29844.086: 99.4066% ( 4) 00:08:15.643 29844.086 - 30045.735: 99.5451% ( 14) 00:08:15.643 30045.735 - 30247.385: 99.6737% ( 13) 00:08:15.643 30247.385 - 30449.034: 99.8022% ( 13) 00:08:15.643 30449.034 - 30650.683: 99.9407% ( 14) 00:08:15.643 30650.683 - 30852.332: 100.0000% ( 6) 00:08:15.643 00:08:15.643 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:15.643 ============================================================================== 00:08:15.643 Range in us Cumulative IO count 00:08:15.643 4612.726 - 4637.932: 0.0198% ( 2) 00:08:15.643 4637.932 - 4663.138: 0.0494% ( 3) 00:08:15.643 4663.138 - 4688.345: 0.0692% ( 2) 00:08:15.643 4688.345 - 4713.551: 0.0989% ( 3) 00:08:15.643 4713.551 - 4738.757: 0.1286% ( 3) 00:08:15.643 4738.757 - 
4763.963: 0.1978% ( 7) 00:08:15.643 4763.963 - 4789.169: 0.2472% ( 5) 00:08:15.643 4789.169 - 4814.375: 0.3461% ( 10) 00:08:15.643 4814.375 - 4839.582: 0.3857% ( 4) 00:08:15.643 4839.582 - 4864.788: 0.4351% ( 5) 00:08:15.643 4864.788 - 4889.994: 0.4846% ( 5) 00:08:15.643 4889.994 - 4915.200: 0.5044% ( 2) 00:08:15.643 4915.200 - 4940.406: 0.5340% ( 3) 00:08:15.643 4940.406 - 4965.612: 0.5538% ( 2) 00:08:15.643 4965.612 - 4990.818: 0.5736% ( 2) 00:08:15.643 4990.818 - 5016.025: 0.6032% ( 3) 00:08:15.643 5016.025 - 5041.231: 0.6230% ( 2) 00:08:15.643 5041.231 - 5066.437: 0.6329% ( 1) 00:08:15.643 8771.742 - 8822.154: 0.6428% ( 1) 00:08:15.643 8822.154 - 8872.566: 0.6725% ( 3) 00:08:15.643 8872.566 - 8922.978: 0.7120% ( 4) 00:08:15.643 8922.978 - 8973.391: 0.7516% ( 4) 00:08:15.643 8973.391 - 9023.803: 0.7911% ( 4) 00:08:15.643 9023.803 - 9074.215: 0.8900% ( 10) 00:08:15.643 9074.215 - 9124.628: 1.0186% ( 13) 00:08:15.643 9124.628 - 9175.040: 1.0878% ( 7) 00:08:15.643 9175.040 - 9225.452: 1.1274% ( 4) 00:08:15.643 9225.452 - 9275.865: 1.1669% ( 4) 00:08:15.643 9275.865 - 9326.277: 1.2065% ( 4) 00:08:15.643 9326.277 - 9376.689: 1.2757% ( 7) 00:08:15.643 9376.689 - 9427.102: 1.3252% ( 5) 00:08:15.643 9427.102 - 9477.514: 1.3548% ( 3) 00:08:15.643 9477.514 - 9527.926: 1.4043% ( 5) 00:08:15.643 9527.926 - 9578.338: 1.4241% ( 2) 00:08:15.643 9578.338 - 9628.751: 1.4636% ( 4) 00:08:15.643 9628.751 - 9679.163: 1.6317% ( 17) 00:08:15.643 9679.163 - 9729.575: 1.6713% ( 4) 00:08:15.643 9729.575 - 9779.988: 1.7306% ( 6) 00:08:15.643 9779.988 - 9830.400: 1.7998% ( 7) 00:08:15.643 9830.400 - 9880.812: 1.8987% ( 10) 00:08:15.643 9880.812 - 9931.225: 2.0273% ( 13) 00:08:15.643 9931.225 - 9981.637: 2.2152% ( 19) 00:08:15.643 9981.637 - 10032.049: 2.5415% ( 33) 00:08:15.643 10032.049 - 10082.462: 3.1448% ( 61) 00:08:15.643 10082.462 - 10132.874: 3.6689% ( 53) 00:08:15.643 10132.874 - 10183.286: 4.1139% ( 45) 00:08:15.643 10183.286 - 10233.698: 4.6479% ( 54) 00:08:15.643 10233.698 - 10284.111: 5.1523% ( 51) 00:08:15.643 10284.111 - 10334.523: 5.7654% ( 62) 00:08:15.643 10334.523 - 10384.935: 6.6851% ( 93) 00:08:15.643 10384.935 - 10435.348: 7.4565% ( 78) 00:08:15.643 10435.348 - 10485.760: 8.2773% ( 83) 00:08:15.643 10485.760 - 10536.172: 9.0684% ( 80) 00:08:15.643 10536.172 - 10586.585: 9.8497% ( 79) 00:08:15.643 10586.585 - 10636.997: 10.9078% ( 107) 00:08:15.643 10636.997 - 10687.409: 11.9264% ( 103) 00:08:15.643 10687.409 - 10737.822: 12.9450% ( 103) 00:08:15.643 10737.822 - 10788.234: 14.0032% ( 107) 00:08:15.643 10788.234 - 10838.646: 15.0613% ( 107) 00:08:15.643 10838.646 - 10889.058: 16.3964% ( 135) 00:08:15.643 10889.058 - 10939.471: 18.0083% ( 163) 00:08:15.643 10939.471 - 10989.883: 19.3631% ( 137) 00:08:15.643 10989.883 - 11040.295: 20.5894% ( 124) 00:08:15.643 11040.295 - 11090.708: 21.8651% ( 129) 00:08:15.643 11090.708 - 11141.120: 23.0320% ( 118) 00:08:15.643 11141.120 - 11191.532: 24.3275% ( 131) 00:08:15.643 11191.532 - 11241.945: 25.5637% ( 125) 00:08:15.643 11241.945 - 11292.357: 26.4339% ( 88) 00:08:15.643 11292.357 - 11342.769: 27.5020% ( 108) 00:08:15.643 11342.769 - 11393.182: 28.4118% ( 92) 00:08:15.643 11393.182 - 11443.594: 29.1930% ( 79) 00:08:15.643 11443.594 - 11494.006: 29.9941% ( 81) 00:08:15.643 11494.006 - 11544.418: 30.9533% ( 97) 00:08:15.643 11544.418 - 11594.831: 32.1203% ( 118) 00:08:15.643 11594.831 - 11645.243: 33.4652% ( 136) 00:08:15.643 11645.243 - 11695.655: 34.8497% ( 140) 00:08:15.643 11695.655 - 11746.068: 36.3430% ( 151) 00:08:15.643 11746.068 - 11796.480: 
37.5692% ( 124) 00:08:15.643 11796.480 - 11846.892: 38.6867% ( 113) 00:08:15.643 11846.892 - 11897.305: 39.8438% ( 117) 00:08:15.643 11897.305 - 11947.717: 40.9217% ( 109) 00:08:15.643 11947.717 - 11998.129: 42.2369% ( 133) 00:08:15.643 11998.129 - 12048.542: 43.5720% ( 135) 00:08:15.643 12048.542 - 12098.954: 44.8081% ( 125) 00:08:15.643 12098.954 - 12149.366: 45.9751% ( 118) 00:08:15.643 12149.366 - 12199.778: 47.1816% ( 122) 00:08:15.643 12199.778 - 12250.191: 48.7441% ( 158) 00:08:15.643 12250.191 - 12300.603: 50.1780% ( 145) 00:08:15.643 12300.603 - 12351.015: 51.3944% ( 123) 00:08:15.643 12351.015 - 12401.428: 52.7591% ( 138) 00:08:15.643 12401.428 - 12451.840: 53.8667% ( 112) 00:08:15.643 12451.840 - 12502.252: 55.1226% ( 127) 00:08:15.643 12502.252 - 12552.665: 56.5368% ( 143) 00:08:15.643 12552.665 - 12603.077: 57.8619% ( 134) 00:08:15.643 12603.077 - 12653.489: 59.0783% ( 123) 00:08:15.643 12653.489 - 12703.902: 60.2650% ( 120) 00:08:15.643 12703.902 - 12754.314: 61.3726% ( 112) 00:08:15.643 12754.314 - 12804.726: 62.5297% ( 117) 00:08:15.643 12804.726 - 12855.138: 63.4395% ( 92) 00:08:15.643 12855.138 - 12905.551: 64.6163% ( 119) 00:08:15.643 12905.551 - 13006.375: 66.7722% ( 218) 00:08:15.643 13006.375 - 13107.200: 68.8192% ( 207) 00:08:15.643 13107.200 - 13208.025: 70.3521% ( 155) 00:08:15.643 13208.025 - 13308.849: 72.0431% ( 171) 00:08:15.643 13308.849 - 13409.674: 73.5364% ( 151) 00:08:15.643 13409.674 - 13510.498: 74.5154% ( 99) 00:08:15.643 13510.498 - 13611.323: 75.5439% ( 104) 00:08:15.643 13611.323 - 13712.148: 76.6515% ( 112) 00:08:15.643 13712.148 - 13812.972: 78.0657% ( 143) 00:08:15.643 13812.972 - 13913.797: 79.1139% ( 106) 00:08:15.643 13913.797 - 14014.622: 80.2116% ( 111) 00:08:15.643 14014.622 - 14115.446: 81.2006% ( 100) 00:08:15.643 14115.446 - 14216.271: 82.0906% ( 90) 00:08:15.643 14216.271 - 14317.095: 83.0103% ( 93) 00:08:15.643 14317.095 - 14417.920: 84.0981% ( 110) 00:08:15.643 14417.920 - 14518.745: 85.3738% ( 129) 00:08:15.643 14518.745 - 14619.569: 86.3825% ( 102) 00:08:15.643 14619.569 - 14720.394: 87.4110% ( 104) 00:08:15.643 14720.394 - 14821.218: 88.4988% ( 110) 00:08:15.643 14821.218 - 14922.043: 89.5669% ( 108) 00:08:15.643 14922.043 - 15022.868: 90.5558% ( 100) 00:08:15.643 15022.868 - 15123.692: 91.6337% ( 109) 00:08:15.643 15123.692 - 15224.517: 92.5732% ( 95) 00:08:15.643 15224.517 - 15325.342: 93.2753% ( 71) 00:08:15.643 15325.342 - 15426.166: 93.9379% ( 67) 00:08:15.643 15426.166 - 15526.991: 94.4917% ( 56) 00:08:15.643 15526.991 - 15627.815: 95.1543% ( 67) 00:08:15.643 15627.815 - 15728.640: 95.6982% ( 55) 00:08:15.643 15728.640 - 15829.465: 96.1531% ( 46) 00:08:15.643 15829.465 - 15930.289: 96.5783% ( 43) 00:08:15.643 15930.289 - 16031.114: 96.7464% ( 17) 00:08:15.643 16031.114 - 16131.938: 96.8948% ( 15) 00:08:15.643 16131.938 - 16232.763: 97.0530% ( 16) 00:08:15.643 16232.763 - 16333.588: 97.3991% ( 35) 00:08:15.643 16333.588 - 16434.412: 97.7453% ( 35) 00:08:15.643 16434.412 - 16535.237: 97.9233% ( 18) 00:08:15.643 16535.237 - 16636.062: 98.0320% ( 11) 00:08:15.643 16636.062 - 16736.886: 98.1013% ( 7) 00:08:15.643 16938.535 - 17039.360: 98.1112% ( 1) 00:08:15.643 17039.360 - 17140.185: 98.1507% ( 4) 00:08:15.643 17140.185 - 17241.009: 98.2100% ( 6) 00:08:15.643 17241.009 - 17341.834: 98.2496% ( 4) 00:08:15.643 17341.834 - 17442.658: 98.2991% ( 5) 00:08:15.643 17442.658 - 17543.483: 98.3584% ( 6) 00:08:15.643 17543.483 - 17644.308: 98.4078% ( 5) 00:08:15.644 17644.308 - 17745.132: 98.4474% ( 4) 00:08:15.644 17745.132 - 17845.957: 
98.5265% ( 8) 00:08:15.644 17845.957 - 17946.782: 98.5957% ( 7) 00:08:15.644 17946.782 - 18047.606: 98.6551% ( 6) 00:08:15.644 18047.606 - 18148.431: 98.6946% ( 4) 00:08:15.644 18148.431 - 18249.255: 98.7243% ( 3) 00:08:15.644 18249.255 - 18350.080: 98.7342% ( 1) 00:08:15.644 22282.240 - 22383.065: 98.7441% ( 1) 00:08:15.644 22383.065 - 22483.889: 98.7836% ( 4) 00:08:15.644 22483.889 - 22584.714: 98.8331% ( 5) 00:08:15.644 22584.714 - 22685.538: 98.8726% ( 4) 00:08:15.644 22685.538 - 22786.363: 98.9122% ( 4) 00:08:15.644 22786.363 - 22887.188: 98.9517% ( 4) 00:08:15.644 22887.188 - 22988.012: 98.9814% ( 3) 00:08:15.644 22988.012 - 23088.837: 99.0309% ( 5) 00:08:15.644 23088.837 - 23189.662: 99.0803% ( 5) 00:08:15.644 23189.662 - 23290.486: 99.1396% ( 6) 00:08:15.644 23290.486 - 23391.311: 99.2089% ( 7) 00:08:15.644 23391.311 - 23492.135: 99.2781% ( 7) 00:08:15.644 23492.135 - 23592.960: 99.3473% ( 7) 00:08:15.644 23592.960 - 23693.785: 99.3671% ( 2) 00:08:15.644 29844.086 - 30045.735: 99.4066% ( 4) 00:08:15.644 30045.735 - 30247.385: 99.5352% ( 13) 00:08:15.644 30247.385 - 30449.034: 99.6737% ( 14) 00:08:15.644 30449.034 - 30650.683: 99.8022% ( 13) 00:08:15.644 30650.683 - 30852.332: 99.9308% ( 13) 00:08:15.644 30852.332 - 31053.982: 100.0000% ( 7) 00:08:15.644 00:08:15.644 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:15.644 ============================================================================== 00:08:15.644 Range in us Cumulative IO count 00:08:15.644 3856.542 - 3881.748: 0.0197% ( 2) 00:08:15.644 3881.748 - 3906.954: 0.0491% ( 3) 00:08:15.644 3906.954 - 3932.160: 0.0786% ( 3) 00:08:15.644 3932.160 - 3957.366: 0.1179% ( 4) 00:08:15.644 3957.366 - 3982.572: 0.1867% ( 7) 00:08:15.644 3982.572 - 4007.778: 0.2653% ( 8) 00:08:15.644 4007.778 - 4032.985: 0.2948% ( 3) 00:08:15.644 4032.985 - 4058.191: 0.3145% ( 2) 00:08:15.644 4058.191 - 4083.397: 0.3439% ( 3) 00:08:15.644 4083.397 - 4108.603: 0.3636% ( 2) 00:08:15.644 4108.603 - 4133.809: 0.3931% ( 3) 00:08:15.644 4133.809 - 4159.015: 0.4127% ( 2) 00:08:15.644 4159.015 - 4184.222: 0.4422% ( 3) 00:08:15.644 4184.222 - 4209.428: 0.4619% ( 2) 00:08:15.644 4209.428 - 4234.634: 0.4914% ( 3) 00:08:15.644 4234.634 - 4259.840: 0.5110% ( 2) 00:08:15.644 4259.840 - 4285.046: 0.5405% ( 3) 00:08:15.644 4285.046 - 4310.252: 0.5601% ( 2) 00:08:15.644 4310.252 - 4335.458: 0.5798% ( 2) 00:08:15.644 4335.458 - 4360.665: 0.6093% ( 3) 00:08:15.644 4360.665 - 4385.871: 0.6191% ( 1) 00:08:15.644 4385.871 - 4411.077: 0.6289% ( 1) 00:08:15.644 8267.618 - 8318.031: 0.6682% ( 4) 00:08:15.644 8318.031 - 8368.443: 0.7272% ( 6) 00:08:15.644 8368.443 - 8418.855: 0.7763% ( 5) 00:08:15.644 8418.855 - 8469.268: 0.8746% ( 10) 00:08:15.644 8469.268 - 8519.680: 0.9139% ( 4) 00:08:15.644 8519.680 - 8570.092: 0.9336% ( 2) 00:08:15.644 8570.092 - 8620.505: 0.9729% ( 4) 00:08:15.644 8620.505 - 8670.917: 1.0220% ( 5) 00:08:15.644 8670.917 - 8721.329: 1.0613% ( 4) 00:08:15.644 8721.329 - 8771.742: 1.1006% ( 4) 00:08:15.644 8771.742 - 8822.154: 1.1399% ( 4) 00:08:15.644 8822.154 - 8872.566: 1.1792% ( 4) 00:08:15.644 8872.566 - 8922.978: 1.2087% ( 3) 00:08:15.644 8922.978 - 8973.391: 1.2579% ( 5) 00:08:15.644 9376.689 - 9427.102: 1.2677% ( 1) 00:08:15.644 9427.102 - 9477.514: 1.2775% ( 1) 00:08:15.644 9477.514 - 9527.926: 1.3463% ( 7) 00:08:15.644 9527.926 - 9578.338: 1.4151% ( 7) 00:08:15.644 9578.338 - 9628.751: 1.5330% ( 12) 00:08:15.644 9628.751 - 9679.163: 1.6411% ( 11) 00:08:15.644 9679.163 - 9729.575: 1.7590% ( 12) 00:08:15.644 9729.575 - 9779.988: 
1.8278% ( 7) 00:08:15.644 9779.988 - 9830.400: 1.8966% ( 7) 00:08:15.644 9830.400 - 9880.812: 2.0637% ( 17) 00:08:15.644 9880.812 - 9931.225: 2.2897% ( 23) 00:08:15.644 9931.225 - 9981.637: 2.4666% ( 18) 00:08:15.644 9981.637 - 10032.049: 2.7516% ( 29) 00:08:15.644 10032.049 - 10082.462: 3.0071% ( 26) 00:08:15.644 10082.462 - 10132.874: 3.2822% ( 28) 00:08:15.644 10132.874 - 10183.286: 3.7146% ( 44) 00:08:15.644 10183.286 - 10233.698: 4.1372% ( 43) 00:08:15.644 10233.698 - 10284.111: 4.6678% ( 54) 00:08:15.644 10284.111 - 10334.523: 5.4737% ( 82) 00:08:15.644 10334.523 - 10384.935: 6.4564% ( 100) 00:08:15.644 10384.935 - 10435.348: 7.2327% ( 79) 00:08:15.644 10435.348 - 10485.760: 8.0385% ( 82) 00:08:15.644 10485.760 - 10536.172: 8.9230% ( 90) 00:08:15.644 10536.172 - 10586.585: 9.7877% ( 88) 00:08:15.644 10586.585 - 10636.997: 10.5248% ( 75) 00:08:15.644 10636.997 - 10687.409: 11.2716% ( 76) 00:08:15.644 10687.409 - 10737.822: 12.2936% ( 104) 00:08:15.644 10737.822 - 10788.234: 13.3255% ( 105) 00:08:15.644 10788.234 - 10838.646: 14.3377% ( 103) 00:08:15.644 10838.646 - 10889.058: 15.4088% ( 109) 00:08:15.644 10889.058 - 10939.471: 16.4800% ( 109) 00:08:15.644 10939.471 - 10989.883: 17.9737% ( 152) 00:08:15.644 10989.883 - 11040.295: 19.4477% ( 150) 00:08:15.644 11040.295 - 11090.708: 20.6663% ( 124) 00:08:15.644 11090.708 - 11141.120: 21.9045% ( 126) 00:08:15.644 11141.120 - 11191.532: 23.6635% ( 179) 00:08:15.644 11191.532 - 11241.945: 25.1769% ( 154) 00:08:15.644 11241.945 - 11292.357: 26.3856% ( 123) 00:08:15.644 11292.357 - 11342.769: 27.3978% ( 103) 00:08:15.644 11342.769 - 11393.182: 28.3510% ( 97) 00:08:15.644 11393.182 - 11443.594: 29.2551% ( 92) 00:08:15.644 11443.594 - 11494.006: 30.1101% ( 87) 00:08:15.644 11494.006 - 11544.418: 30.9650% ( 87) 00:08:15.644 11544.418 - 11594.831: 31.8003% ( 85) 00:08:15.644 11594.831 - 11645.243: 32.8125% ( 103) 00:08:15.644 11645.243 - 11695.655: 34.0212% ( 123) 00:08:15.644 11695.655 - 11746.068: 35.1513% ( 115) 00:08:15.644 11746.068 - 11796.480: 36.5075% ( 138) 00:08:15.644 11796.480 - 11846.892: 37.7162% ( 123) 00:08:15.644 11846.892 - 11897.305: 39.1116% ( 142) 00:08:15.644 11897.305 - 11947.717: 40.7331% ( 165) 00:08:15.644 11947.717 - 11998.129: 42.7083% ( 201) 00:08:15.644 11998.129 - 12048.542: 44.2217% ( 154) 00:08:15.644 12048.542 - 12098.954: 45.4009% ( 120) 00:08:15.644 12098.954 - 12149.366: 46.6195% ( 124) 00:08:15.644 12149.366 - 12199.778: 47.7594% ( 116) 00:08:15.644 12199.778 - 12250.191: 49.2630% ( 153) 00:08:15.644 12250.191 - 12300.603: 50.3833% ( 114) 00:08:15.644 12300.603 - 12351.015: 51.9064% ( 155) 00:08:15.644 12351.015 - 12401.428: 53.1053% ( 122) 00:08:15.644 12401.428 - 12451.840: 54.3829% ( 130) 00:08:15.644 12451.840 - 12502.252: 55.6506% ( 129) 00:08:15.644 12502.252 - 12552.665: 56.8298% ( 120) 00:08:15.644 12552.665 - 12603.077: 58.0680% ( 126) 00:08:15.644 12603.077 - 12653.489: 59.4438% ( 140) 00:08:15.644 12653.489 - 12703.902: 60.8589% ( 144) 00:08:15.644 12703.902 - 12754.314: 62.3133% ( 148) 00:08:15.644 12754.314 - 12804.726: 63.1388% ( 84) 00:08:15.644 12804.726 - 12855.138: 63.8168% ( 69) 00:08:15.644 12855.138 - 12905.551: 64.6325% ( 83) 00:08:15.644 12905.551 - 13006.375: 66.1655% ( 156) 00:08:15.644 13006.375 - 13107.200: 67.3546% ( 121) 00:08:15.644 13107.200 - 13208.025: 68.6419% ( 131) 00:08:15.644 13208.025 - 13308.849: 69.9489% ( 133) 00:08:15.644 13308.849 - 13409.674: 71.3738% ( 145) 00:08:15.644 13409.674 - 13510.498: 72.5531% ( 120) 00:08:15.644 13510.498 - 13611.323: 73.6340% ( 110) 
00:08:15.644 13611.323 - 13712.148: 75.0590% ( 145) 00:08:15.644 13712.148 - 13812.972: 76.7590% ( 173) 00:08:15.644 13812.972 - 13913.797: 78.0955% ( 136) 00:08:15.644 13913.797 - 14014.622: 79.7268% ( 166) 00:08:15.644 14014.622 - 14115.446: 81.6824% ( 199) 00:08:15.644 14115.446 - 14216.271: 83.5397% ( 189) 00:08:15.644 14216.271 - 14317.095: 85.3381% ( 183) 00:08:15.644 14317.095 - 14417.920: 86.4190% ( 110) 00:08:15.644 14417.920 - 14518.745: 87.2445% ( 84) 00:08:15.644 14518.745 - 14619.569: 88.5318% ( 131) 00:08:15.644 14619.569 - 14720.394: 89.4949% ( 98) 00:08:15.644 14720.394 - 14821.218: 90.3007% ( 82) 00:08:15.644 14821.218 - 14922.043: 91.1360% ( 85) 00:08:15.644 14922.043 - 15022.868: 91.7453% ( 62) 00:08:15.645 15022.868 - 15123.692: 92.3742% ( 64) 00:08:15.645 15123.692 - 15224.517: 92.8852% ( 52) 00:08:15.645 15224.517 - 15325.342: 93.3274% ( 45) 00:08:15.645 15325.342 - 15426.166: 93.8090% ( 49) 00:08:15.645 15426.166 - 15526.991: 94.3298% ( 53) 00:08:15.645 15526.991 - 15627.815: 94.6934% ( 37) 00:08:15.645 15627.815 - 15728.640: 95.4992% ( 82) 00:08:15.645 15728.640 - 15829.465: 96.1183% ( 63) 00:08:15.645 15829.465 - 15930.289: 96.6392% ( 53) 00:08:15.645 15930.289 - 16031.114: 96.9143% ( 28) 00:08:15.645 16031.114 - 16131.938: 97.1207% ( 21) 00:08:15.645 16131.938 - 16232.763: 97.2681% ( 15) 00:08:15.645 16232.763 - 16333.588: 97.4450% ( 18) 00:08:15.645 16333.588 - 16434.412: 97.5825% ( 14) 00:08:15.645 16434.412 - 16535.237: 97.7005% ( 12) 00:08:15.645 16535.237 - 16636.062: 97.8184% ( 12) 00:08:15.645 16636.062 - 16736.886: 97.9461% ( 13) 00:08:15.645 16736.886 - 16837.711: 98.1230% ( 18) 00:08:15.645 16837.711 - 16938.535: 98.3294% ( 21) 00:08:15.645 16938.535 - 17039.360: 98.4965% ( 17) 00:08:15.645 17039.360 - 17140.185: 98.6832% ( 19) 00:08:15.645 17140.185 - 17241.009: 98.8404% ( 16) 00:08:15.645 17241.009 - 17341.834: 98.9485% ( 11) 00:08:15.645 17341.834 - 17442.658: 99.0763% ( 13) 00:08:15.645 17442.658 - 17543.483: 99.2433% ( 17) 00:08:15.645 17543.483 - 17644.308: 99.3023% ( 6) 00:08:15.645 17644.308 - 17745.132: 99.3416% ( 4) 00:08:15.645 17745.132 - 17845.957: 99.3711% ( 3) 00:08:15.645 22483.889 - 22584.714: 99.4202% ( 5) 00:08:15.645 22584.714 - 22685.538: 99.4792% ( 6) 00:08:15.645 22685.538 - 22786.363: 99.5480% ( 7) 00:08:15.645 22786.363 - 22887.188: 99.6167% ( 7) 00:08:15.645 22887.188 - 22988.012: 99.6462% ( 3) 00:08:15.645 22988.012 - 23088.837: 99.7150% ( 7) 00:08:15.645 23088.837 - 23189.662: 99.7740% ( 6) 00:08:15.645 23189.662 - 23290.486: 99.8329% ( 6) 00:08:15.645 23290.486 - 23391.311: 99.8919% ( 6) 00:08:15.645 23391.311 - 23492.135: 99.9607% ( 7) 00:08:15.645 23492.135 - 23592.960: 100.0000% ( 4) 00:08:15.645 00:08:15.645 21:40:38 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:15.645 00:08:15.645 real 0m2.422s 00:08:15.645 user 0m2.163s 00:08:15.645 sys 0m0.149s 00:08:15.645 21:40:38 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.645 21:40:38 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:15.645 ************************************ 00:08:15.645 END TEST nvme_perf 00:08:15.645 ************************************ 00:08:15.645 21:40:38 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:15.645 21:40:38 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:15.645 21:40:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.645 21:40:38 nvme -- common/autotest_common.sh@10 -- 
# set +x 00:08:15.645 ************************************ 00:08:15.645 START TEST nvme_hello_world 00:08:15.645 ************************************ 00:08:15.645 21:40:38 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:15.904 Initializing NVMe Controllers 00:08:15.904 Attached to 0000:00:10.0 00:08:15.904 Namespace ID: 1 size: 6GB 00:08:15.904 Attached to 0000:00:13.0 00:08:15.904 Namespace ID: 1 size: 1GB 00:08:15.904 Attached to 0000:00:11.0 00:08:15.904 Namespace ID: 1 size: 5GB 00:08:15.904 Attached to 0000:00:12.0 00:08:15.904 Namespace ID: 1 size: 4GB 00:08:15.904 Namespace ID: 2 size: 4GB 00:08:15.904 Namespace ID: 3 size: 4GB 00:08:15.904 Initialization complete. 00:08:15.904 INFO: using host memory buffer for IO 00:08:15.904 Hello world! 00:08:15.904 INFO: using host memory buffer for IO 00:08:15.904 Hello world! 00:08:15.904 INFO: using host memory buffer for IO 00:08:15.904 Hello world! 00:08:15.904 INFO: using host memory buffer for IO 00:08:15.904 Hello world! 00:08:15.904 INFO: using host memory buffer for IO 00:08:15.904 Hello world! 00:08:15.904 INFO: using host memory buffer for IO 00:08:15.904 Hello world! 00:08:15.904 00:08:15.904 real 0m0.195s 00:08:15.904 user 0m0.075s 00:08:15.904 sys 0m0.072s 00:08:15.904 21:40:38 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.904 21:40:38 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:15.904 ************************************ 00:08:15.904 END TEST nvme_hello_world 00:08:15.904 ************************************ 00:08:15.904 21:40:38 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:15.904 21:40:38 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:15.904 21:40:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.904 21:40:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:15.904 ************************************ 00:08:15.904 START TEST nvme_sgl 00:08:15.904 ************************************ 00:08:15.904 21:40:38 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:15.904 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:15.904 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:15.904 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:15.904 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:15.904 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:15.904 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:15.904 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:15.904 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:15.904 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:15.904 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:15.904 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:15.904 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:15.904 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:15.904 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:15.904 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:15.904 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:15.904 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:15.904 0000:00:13.0: build_io_request_11 
Invalid IO length parameter 00:08:15.904 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:15.904 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:15.904 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:16.163 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:16.163 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:16.163 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:16.163 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:08:16.163 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:16.163 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:08:16.163 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:08:16.163 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:16.163 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:16.163 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:16.163 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:16.163 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:16.163 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:08:16.163 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:16.163 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:16.163 NVMe Readv/Writev Request test 00:08:16.163 Attached to 0000:00:10.0 00:08:16.163 Attached to 0000:00:13.0 00:08:16.163 Attached to 0000:00:11.0 00:08:16.163 Attached to 0000:00:12.0 00:08:16.163 0000:00:10.0: build_io_request_2 test passed 00:08:16.163 0000:00:10.0: build_io_request_4 test passed 00:08:16.163 0000:00:10.0: build_io_request_5 test passed 00:08:16.163 0000:00:10.0: build_io_request_6 test passed 00:08:16.163 0000:00:10.0: build_io_request_7 test passed 00:08:16.163 0000:00:10.0: build_io_request_10 test passed 00:08:16.163 0000:00:11.0: build_io_request_2 test passed 00:08:16.163 0000:00:11.0: build_io_request_4 test passed 00:08:16.163 0000:00:11.0: build_io_request_5 test passed 00:08:16.163 0000:00:11.0: build_io_request_6 test passed 00:08:16.163 0000:00:11.0: build_io_request_7 test passed 00:08:16.163 0000:00:11.0: build_io_request_10 test passed 00:08:16.163 Cleaning up... 00:08:16.163 00:08:16.163 real 0m0.226s 00:08:16.163 user 0m0.119s 00:08:16.163 sys 0m0.072s 00:08:16.163 21:40:39 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.163 21:40:39 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:16.163 ************************************ 00:08:16.163 END TEST nvme_sgl 00:08:16.163 ************************************ 00:08:16.163 21:40:39 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:16.163 21:40:39 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:16.163 21:40:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.163 21:40:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:16.163 ************************************ 00:08:16.163 START TEST nvme_e2edp 00:08:16.163 ************************************ 00:08:16.163 21:40:39 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:16.421 NVMe Write/Read with End-to-End data protection test 00:08:16.421 Attached to 0000:00:10.0 00:08:16.421 Attached to 0000:00:13.0 00:08:16.421 Attached to 0000:00:11.0 00:08:16.421 Attached to 0000:00:12.0 00:08:16.421 Cleaning up... 
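The nvme_e2edp run above issues writes and reads with end-to-end data protection enabled on each attached controller. The following is a minimal sketch of that kind of I/O through the public SPDK NVMe API, assuming a namespace formatted with protection information and separate DMA-able data/metadata buffers; it is an illustration, not the nvme_dp tool's actual source, and error handling is omitted.

/*
 * Hedged sketch: one protected write with guard/reference-tag checks.
 * The helper name, LBA and tag values are illustrative.
 */
#include "spdk/nvme.h"

static void
io_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	/* Completion status would be checked here in real code. */
}

static void
write_with_pi(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
	      void *data, void *md)
{
	uint32_t flags = SPDK_NVME_IO_FLAGS_PRCHK_GUARD |
			 SPDK_NVME_IO_FLAGS_PRCHK_REFTAG;

	/* Data and metadata (carrying the PI fields) travel separately. */
	spdk_nvme_ns_cmd_write_with_md(ns, qpair, data, md,
				       0 /* LBA */, 1 /* LBA count */,
				       io_done, NULL, flags,
				       0xffff /* apptag mask */, 0 /* apptag */);
}

Passing SPDK_NVME_IO_FLAGS_PRACT instead of the PRCHK flags would ask the controller to insert and strip the protection information itself rather than verify caller-supplied tags.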
00:08:16.421 00:08:16.421 real 0m0.191s 00:08:16.421 user 0m0.054s 00:08:16.421 sys 0m0.078s 00:08:16.421 21:40:39 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.421 21:40:39 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:16.421 ************************************ 00:08:16.421 END TEST nvme_e2edp 00:08:16.421 ************************************ 00:08:16.421 21:40:39 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:16.421 21:40:39 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:16.421 21:40:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.421 21:40:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:16.421 ************************************ 00:08:16.421 START TEST nvme_reserve 00:08:16.421 ************************************ 00:08:16.421 21:40:39 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:16.421 ===================================================== 00:08:16.421 NVMe Controller at PCI bus 0, device 16, function 0 00:08:16.421 ===================================================== 00:08:16.421 Reservations: Not Supported 00:08:16.421 ===================================================== 00:08:16.421 NVMe Controller at PCI bus 0, device 19, function 0 00:08:16.421 ===================================================== 00:08:16.421 Reservations: Not Supported 00:08:16.421 ===================================================== 00:08:16.421 NVMe Controller at PCI bus 0, device 17, function 0 00:08:16.421 ===================================================== 00:08:16.421 Reservations: Not Supported 00:08:16.421 ===================================================== 00:08:16.421 NVMe Controller at PCI bus 0, device 18, function 0 00:08:16.421 ===================================================== 00:08:16.421 Reservations: Not Supported 00:08:16.421 Reservation test passed 00:08:16.421 00:08:16.421 real 0m0.165s 00:08:16.421 user 0m0.057s 00:08:16.421 sys 0m0.071s 00:08:16.421 21:40:39 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.421 21:40:39 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:16.421 ************************************ 00:08:16.421 END TEST nvme_reserve 00:08:16.421 ************************************ 00:08:16.421 21:40:39 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:16.421 21:40:39 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:16.421 21:40:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.421 21:40:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:16.421 ************************************ 00:08:16.421 START TEST nvme_err_injection 00:08:16.421 ************************************ 00:08:16.421 21:40:39 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:16.679 NVMe Error Injection test 00:08:16.679 Attached to 0000:00:10.0 00:08:16.679 Attached to 0000:00:13.0 00:08:16.679 Attached to 0000:00:11.0 00:08:16.679 Attached to 0000:00:12.0 00:08:16.679 0000:00:10.0: get features failed as expected 00:08:16.679 0000:00:13.0: get features failed as expected 00:08:16.679 0000:00:11.0: get features failed as expected 00:08:16.679 0000:00:12.0: get features failed as expected 00:08:16.679 
0000:00:10.0: get features successfully as expected 00:08:16.679 0000:00:13.0: get features successfully as expected 00:08:16.679 0000:00:11.0: get features successfully as expected 00:08:16.679 0000:00:12.0: get features successfully as expected 00:08:16.679 0000:00:10.0: read failed as expected 00:08:16.679 0000:00:13.0: read failed as expected 00:08:16.679 0000:00:11.0: read failed as expected 00:08:16.679 0000:00:12.0: read failed as expected 00:08:16.679 0000:00:10.0: read successfully as expected 00:08:16.679 0000:00:13.0: read successfully as expected 00:08:16.679 0000:00:11.0: read successfully as expected 00:08:16.679 0000:00:12.0: read successfully as expected 00:08:16.679 Cleaning up... 00:08:16.679 00:08:16.679 real 0m0.186s 00:08:16.679 user 0m0.067s 00:08:16.679 sys 0m0.072s 00:08:16.679 21:40:39 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.679 ************************************ 00:08:16.679 21:40:39 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:16.679 END TEST nvme_err_injection 00:08:16.679 ************************************ 00:08:16.679 21:40:39 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:16.679 21:40:39 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:08:16.679 21:40:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.679 21:40:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:16.679 ************************************ 00:08:16.679 START TEST nvme_overhead 00:08:16.679 ************************************ 00:08:16.679 21:40:39 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:18.053 Initializing NVMe Controllers 00:08:18.053 Attached to 0000:00:10.0 00:08:18.053 Attached to 0000:00:13.0 00:08:18.053 Attached to 0000:00:11.0 00:08:18.053 Attached to 0000:00:12.0 00:08:18.053 Initialization complete. Launching workers. 
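The submit and complete histograms that follow measure the host-side software cost of issuing one I/O and of reaping its completion. A rough sketch of that path, assuming a controller, namespace and I/O qpair already attached as in the log above and a DMA-able buffer (e.g. from spdk_zmalloc()); this is illustrative only, not the overhead tool's actual source.

/*
 * Hedged sketch of the per-I/O path the histograms below break down.
 * The global flag and function names are illustrative.
 */
#include <stdbool.h>
#include "spdk/nvme.h"

static bool g_done;

static void
io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
	g_done = true;
}

static void
one_io(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair, void *buf)
{
	g_done = false;
	/* "submit (in ns)" corresponds roughly to the cost of this call ... */
	spdk_nvme_ns_cmd_read(ns, qpair, buf, 0 /* LBA */, 1 /* LBA count */,
			      io_complete, NULL, 0 /* io_flags */);
	/* ... and "complete (in ns)" to the cost of reaping the completion. */
	while (!g_done) {
		spdk_nvme_qpair_process_completions(qpair, 0);
	}
}

The two distributions below come from timing those phases separately for every 4 KiB I/O over the one-second run (-o 4096 -t 1 in the command line above).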
00:08:18.053 submit (in ns) avg, min, max = 11173.0, 9895.4, 70049.2 00:08:18.053 complete (in ns) avg, min, max = 7497.7, 7157.7, 248300.0 00:08:18.053 00:08:18.053 Submit histogram 00:08:18.053 ================ 00:08:18.053 Range in us Cumulative Count 00:08:18.053 9.895 - 9.945: 0.0053% ( 1) 00:08:18.053 9.945 - 9.994: 0.0159% ( 2) 00:08:18.053 9.994 - 10.043: 0.0212% ( 1) 00:08:18.053 10.388 - 10.437: 0.0265% ( 1) 00:08:18.053 10.437 - 10.486: 0.0371% ( 2) 00:08:18.053 10.683 - 10.732: 0.0424% ( 1) 00:08:18.053 10.732 - 10.782: 0.3023% ( 49) 00:08:18.053 10.782 - 10.831: 2.4134% ( 398) 00:08:18.053 10.831 - 10.880: 9.4362% ( 1324) 00:08:18.053 10.880 - 10.929: 23.7045% ( 2690) 00:08:18.053 10.929 - 10.978: 43.4785% ( 3728) 00:08:18.053 10.978 - 11.028: 62.5206% ( 3590) 00:08:18.053 11.028 - 11.077: 76.6828% ( 2670) 00:08:18.053 11.077 - 11.126: 84.9838% ( 1565) 00:08:18.053 11.126 - 11.175: 89.3174% ( 817) 00:08:18.053 11.175 - 11.225: 91.3807% ( 389) 00:08:18.053 11.225 - 11.274: 92.4999% ( 211) 00:08:18.053 11.274 - 11.323: 93.1735% ( 127) 00:08:18.053 11.323 - 11.372: 93.6668% ( 93) 00:08:18.053 11.372 - 11.422: 94.0169% ( 66) 00:08:18.053 11.422 - 11.471: 94.3669% ( 66) 00:08:18.053 11.471 - 11.520: 94.6322% ( 50) 00:08:18.053 11.520 - 11.569: 94.9027% ( 51) 00:08:18.053 11.569 - 11.618: 95.1785% ( 52) 00:08:18.053 11.618 - 11.668: 95.3588% ( 34) 00:08:18.053 11.668 - 11.717: 95.5233% ( 31) 00:08:18.053 11.717 - 11.766: 95.6824% ( 30) 00:08:18.053 11.766 - 11.815: 95.7938% ( 21) 00:08:18.053 11.815 - 11.865: 95.9105% ( 22) 00:08:18.053 11.865 - 11.914: 96.0325% ( 23) 00:08:18.053 11.914 - 11.963: 96.1492% ( 22) 00:08:18.053 11.963 - 12.012: 96.2552% ( 20) 00:08:18.053 12.012 - 12.062: 96.3401% ( 16) 00:08:18.053 12.062 - 12.111: 96.4303% ( 17) 00:08:18.053 12.111 - 12.160: 96.4727% ( 8) 00:08:18.053 12.160 - 12.209: 96.5364% ( 12) 00:08:18.053 12.209 - 12.258: 96.6424% ( 20) 00:08:18.053 12.258 - 12.308: 96.7167% ( 14) 00:08:18.053 12.308 - 12.357: 96.7538% ( 7) 00:08:18.053 12.357 - 12.406: 96.7910% ( 7) 00:08:18.053 12.406 - 12.455: 96.8175% ( 5) 00:08:18.053 12.455 - 12.505: 96.8228% ( 1) 00:08:18.053 12.505 - 12.554: 96.8493% ( 5) 00:08:18.053 12.554 - 12.603: 96.8705% ( 4) 00:08:18.053 12.603 - 12.702: 96.9023% ( 6) 00:08:18.053 12.702 - 12.800: 96.9448% ( 8) 00:08:18.053 12.800 - 12.898: 97.0615% ( 22) 00:08:18.053 12.898 - 12.997: 97.1463% ( 16) 00:08:18.053 12.997 - 13.095: 97.2630% ( 22) 00:08:18.053 13.095 - 13.194: 97.4646% ( 38) 00:08:18.053 13.194 - 13.292: 97.6131% ( 28) 00:08:18.053 13.292 - 13.391: 97.7616% ( 28) 00:08:18.053 13.391 - 13.489: 97.8889% ( 24) 00:08:18.053 13.489 - 13.588: 97.9526% ( 12) 00:08:18.053 13.588 - 13.686: 97.9844% ( 6) 00:08:18.053 13.686 - 13.785: 98.0056% ( 4) 00:08:18.053 13.785 - 13.883: 98.0481% ( 8) 00:08:18.053 13.883 - 13.982: 98.0746% ( 5) 00:08:18.053 13.982 - 14.080: 98.0799% ( 1) 00:08:18.053 14.080 - 14.178: 98.1117% ( 6) 00:08:18.053 14.178 - 14.277: 98.1223% ( 2) 00:08:18.053 14.277 - 14.375: 98.1541% ( 6) 00:08:18.053 14.375 - 14.474: 98.1966% ( 8) 00:08:18.053 14.474 - 14.572: 98.2337% ( 7) 00:08:18.053 14.572 - 14.671: 98.2655% ( 6) 00:08:18.053 14.671 - 14.769: 98.3610% ( 18) 00:08:18.053 14.769 - 14.868: 98.4300% ( 13) 00:08:18.053 14.868 - 14.966: 98.4777% ( 9) 00:08:18.053 14.966 - 15.065: 98.4936% ( 3) 00:08:18.053 15.065 - 15.163: 98.5254% ( 6) 00:08:18.053 15.163 - 15.262: 98.5573% ( 6) 00:08:18.053 15.262 - 15.360: 98.5944% ( 7) 00:08:18.054 15.360 - 15.458: 98.6156% ( 4) 00:08:18.054 15.458 - 15.557: 98.6368% ( 4) 
00:08:18.054 15.557 - 15.655: 98.6474% ( 2) 00:08:18.054 15.655 - 15.754: 98.6633% ( 3) 00:08:18.054 15.754 - 15.852: 98.6686% ( 1) 00:08:18.054 15.852 - 15.951: 98.6793% ( 2) 00:08:18.054 15.951 - 16.049: 98.6846% ( 1) 00:08:18.054 16.246 - 16.345: 98.7164% ( 6) 00:08:18.054 16.345 - 16.443: 98.7270% ( 2) 00:08:18.054 16.443 - 16.542: 98.7535% ( 5) 00:08:18.054 16.542 - 16.640: 98.8384% ( 16) 00:08:18.054 16.640 - 16.738: 98.9179% ( 15) 00:08:18.054 16.738 - 16.837: 99.0134% ( 18) 00:08:18.054 16.837 - 16.935: 99.0559% ( 8) 00:08:18.054 16.935 - 17.034: 99.1195% ( 12) 00:08:18.054 17.034 - 17.132: 99.1778% ( 11) 00:08:18.054 17.132 - 17.231: 99.2309% ( 10) 00:08:18.054 17.231 - 17.329: 99.3158% ( 16) 00:08:18.054 17.329 - 17.428: 99.3688% ( 10) 00:08:18.054 17.428 - 17.526: 99.4643% ( 18) 00:08:18.054 17.526 - 17.625: 99.5385% ( 14) 00:08:18.054 17.625 - 17.723: 99.5704% ( 6) 00:08:18.054 17.723 - 17.822: 99.6128% ( 8) 00:08:18.054 17.822 - 17.920: 99.6181% ( 1) 00:08:18.054 17.920 - 18.018: 99.6499% ( 6) 00:08:18.054 18.018 - 18.117: 99.6711% ( 4) 00:08:18.054 18.117 - 18.215: 99.6871% ( 3) 00:08:18.054 18.215 - 18.314: 99.7030% ( 3) 00:08:18.054 18.314 - 18.412: 99.7348% ( 6) 00:08:18.054 18.412 - 18.511: 99.7507% ( 3) 00:08:18.054 18.511 - 18.609: 99.7613% ( 2) 00:08:18.054 18.609 - 18.708: 99.7666% ( 1) 00:08:18.054 18.708 - 18.806: 99.7825% ( 3) 00:08:18.054 18.806 - 18.905: 99.7931% ( 2) 00:08:18.054 18.905 - 19.003: 99.8037% ( 2) 00:08:18.054 19.003 - 19.102: 99.8144% ( 2) 00:08:18.054 19.200 - 19.298: 99.8250% ( 2) 00:08:18.054 19.298 - 19.397: 99.8303% ( 1) 00:08:18.054 19.988 - 20.086: 99.8356% ( 1) 00:08:18.054 20.480 - 20.578: 99.8409% ( 1) 00:08:18.054 20.775 - 20.874: 99.8462% ( 1) 00:08:18.054 20.874 - 20.972: 99.8568% ( 2) 00:08:18.054 21.071 - 21.169: 99.8621% ( 1) 00:08:18.054 21.169 - 21.268: 99.8674% ( 1) 00:08:18.054 21.366 - 21.465: 99.8780% ( 2) 00:08:18.054 21.465 - 21.563: 99.8886% ( 2) 00:08:18.054 21.760 - 21.858: 99.8939% ( 1) 00:08:18.054 22.351 - 22.449: 99.8992% ( 1) 00:08:18.054 22.449 - 22.548: 99.9045% ( 1) 00:08:18.054 22.548 - 22.646: 99.9098% ( 1) 00:08:18.054 23.040 - 23.138: 99.9151% ( 1) 00:08:18.054 23.532 - 23.631: 99.9204% ( 1) 00:08:18.054 23.828 - 23.926: 99.9257% ( 1) 00:08:18.054 24.320 - 24.418: 99.9310% ( 1) 00:08:18.054 25.797 - 25.994: 99.9363% ( 1) 00:08:18.054 26.191 - 26.388: 99.9417% ( 1) 00:08:18.054 26.388 - 26.585: 99.9470% ( 1) 00:08:18.054 26.585 - 26.782: 99.9523% ( 1) 00:08:18.054 33.477 - 33.674: 99.9576% ( 1) 00:08:18.054 33.871 - 34.068: 99.9629% ( 1) 00:08:18.054 34.265 - 34.462: 99.9682% ( 1) 00:08:18.054 40.369 - 40.566: 99.9735% ( 1) 00:08:18.054 46.080 - 46.277: 99.9788% ( 1) 00:08:18.054 49.625 - 49.822: 99.9841% ( 1) 00:08:18.054 51.988 - 52.382: 99.9894% ( 1) 00:08:18.054 59.471 - 59.865: 99.9947% ( 1) 00:08:18.054 69.711 - 70.105: 100.0000% ( 1) 00:08:18.054 00:08:18.054 Complete histogram 00:08:18.054 ================== 00:08:18.054 Range in us Cumulative Count 00:08:18.054 7.138 - 7.188: 0.1803% ( 34) 00:08:18.054 7.188 - 7.237: 4.6412% ( 841) 00:08:18.054 7.237 - 7.286: 23.1316% ( 3486) 00:08:18.054 7.286 - 7.335: 50.8195% ( 5220) 00:08:18.054 7.335 - 7.385: 73.0069% ( 4183) 00:08:18.054 7.385 - 7.434: 85.1164% ( 2283) 00:08:18.054 7.434 - 7.483: 90.3835% ( 993) 00:08:18.054 7.483 - 7.532: 92.6166% ( 421) 00:08:18.054 7.532 - 7.582: 94.2025% ( 299) 00:08:18.054 7.582 - 7.631: 95.3164% ( 210) 00:08:18.054 7.631 - 7.680: 95.8468% ( 100) 00:08:18.054 7.680 - 7.729: 96.1226% ( 52) 00:08:18.054 7.729 - 7.778: 96.2075% 
( 16) 00:08:18.054 7.778 - 7.828: 96.2871% ( 15) 00:08:18.054 7.828 - 7.877: 96.3189% ( 6) 00:08:18.054 7.877 - 7.926: 96.3242% ( 1) 00:08:18.054 7.926 - 7.975: 96.4356% ( 21) 00:08:18.054 7.975 - 8.025: 96.5204% ( 16) 00:08:18.054 8.025 - 8.074: 96.5629% ( 8) 00:08:18.054 8.074 - 8.123: 96.6531% ( 17) 00:08:18.054 8.123 - 8.172: 96.8281% ( 33) 00:08:18.054 8.172 - 8.222: 97.0986% ( 51) 00:08:18.054 8.222 - 8.271: 97.3903% ( 55) 00:08:18.054 8.271 - 8.320: 97.6237% ( 44) 00:08:18.054 8.320 - 8.369: 97.8041% ( 34) 00:08:18.054 8.369 - 8.418: 97.9048% ( 19) 00:08:18.054 8.418 - 8.468: 98.0003% ( 18) 00:08:18.054 8.468 - 8.517: 98.0268% ( 5) 00:08:18.054 8.517 - 8.566: 98.0428% ( 3) 00:08:18.054 8.566 - 8.615: 98.0534% ( 2) 00:08:18.054 8.615 - 8.665: 98.0640% ( 2) 00:08:18.054 8.665 - 8.714: 98.0746% ( 2) 00:08:18.054 8.714 - 8.763: 98.0905% ( 3) 00:08:18.054 8.763 - 8.812: 98.0958% ( 1) 00:08:18.054 8.960 - 9.009: 98.1011% ( 1) 00:08:18.054 9.009 - 9.058: 98.1064% ( 1) 00:08:18.054 9.058 - 9.108: 98.1276% ( 4) 00:08:18.054 9.108 - 9.157: 98.1382% ( 2) 00:08:18.054 9.157 - 9.206: 98.1435% ( 1) 00:08:18.054 9.255 - 9.305: 98.1541% ( 2) 00:08:18.054 9.305 - 9.354: 98.1594% ( 1) 00:08:18.054 9.354 - 9.403: 98.1647% ( 1) 00:08:18.054 9.403 - 9.452: 98.1701% ( 1) 00:08:18.054 9.502 - 9.551: 98.1754% ( 1) 00:08:18.054 9.649 - 9.698: 98.1860% ( 2) 00:08:18.054 9.698 - 9.748: 98.1913% ( 1) 00:08:18.054 9.748 - 9.797: 98.2019% ( 2) 00:08:18.054 9.797 - 9.846: 98.2125% ( 2) 00:08:18.054 9.846 - 9.895: 98.2178% ( 1) 00:08:18.054 9.895 - 9.945: 98.2337% ( 3) 00:08:18.054 9.945 - 9.994: 98.2496% ( 3) 00:08:18.054 9.994 - 10.043: 98.2655% ( 3) 00:08:18.054 10.043 - 10.092: 98.2761% ( 2) 00:08:18.054 10.092 - 10.142: 98.2974% ( 4) 00:08:18.054 10.142 - 10.191: 98.3133% ( 3) 00:08:18.054 10.191 - 10.240: 98.3186% ( 1) 00:08:18.054 10.240 - 10.289: 98.3239% ( 1) 00:08:18.054 10.289 - 10.338: 98.3557% ( 6) 00:08:18.054 10.338 - 10.388: 98.3716% ( 3) 00:08:18.054 10.388 - 10.437: 98.3875% ( 3) 00:08:18.054 10.486 - 10.535: 98.4034% ( 3) 00:08:18.054 10.535 - 10.585: 98.4087% ( 1) 00:08:18.054 10.585 - 10.634: 98.4193% ( 2) 00:08:18.054 10.634 - 10.683: 98.4300% ( 2) 00:08:18.054 10.683 - 10.732: 98.4565% ( 5) 00:08:18.054 10.782 - 10.831: 98.4618% ( 1) 00:08:18.054 10.929 - 10.978: 98.4671% ( 1) 00:08:18.054 11.077 - 11.126: 98.4724% ( 1) 00:08:18.054 11.126 - 11.175: 98.4777% ( 1) 00:08:18.054 11.274 - 11.323: 98.4883% ( 2) 00:08:18.054 11.471 - 11.520: 98.4936% ( 1) 00:08:18.054 12.012 - 12.062: 98.4989% ( 1) 00:08:18.054 12.209 - 12.258: 98.5042% ( 1) 00:08:18.054 12.505 - 12.554: 98.5095% ( 1) 00:08:18.054 12.554 - 12.603: 98.5148% ( 1) 00:08:18.054 12.702 - 12.800: 98.5360% ( 4) 00:08:18.054 12.800 - 12.898: 98.6050% ( 13) 00:08:18.054 12.898 - 12.997: 98.6368% ( 6) 00:08:18.054 12.997 - 13.095: 98.6740% ( 7) 00:08:18.054 13.095 - 13.194: 98.7588% ( 16) 00:08:18.054 13.194 - 13.292: 98.8119% ( 10) 00:08:18.054 13.292 - 13.391: 98.9179% ( 20) 00:08:18.054 13.391 - 13.489: 99.0081% ( 17) 00:08:18.054 13.489 - 13.588: 99.0983% ( 17) 00:08:18.054 13.588 - 13.686: 99.1938% ( 18) 00:08:18.054 13.686 - 13.785: 99.2786% ( 16) 00:08:18.054 13.785 - 13.883: 99.3423% ( 12) 00:08:18.054 13.883 - 13.982: 99.4325% ( 17) 00:08:18.054 13.982 - 14.080: 99.4749% ( 8) 00:08:18.054 14.080 - 14.178: 99.5491% ( 14) 00:08:18.054 14.178 - 14.277: 99.5969% ( 9) 00:08:18.054 14.277 - 14.375: 99.6446% ( 9) 00:08:18.054 14.375 - 14.474: 99.6764% ( 6) 00:08:18.054 14.474 - 14.572: 99.7030% ( 5) 00:08:18.054 14.572 - 14.671: 99.7136% 
( 2) 00:08:18.054 14.671 - 14.769: 99.7401% ( 5) 00:08:18.054 14.769 - 14.868: 99.7454% ( 1) 00:08:18.054 14.868 - 14.966: 99.7560% ( 2) 00:08:18.054 14.966 - 15.065: 99.7666% ( 2) 00:08:18.054 15.065 - 15.163: 99.7772% ( 2) 00:08:18.054 15.163 - 15.262: 99.7825% ( 1) 00:08:18.054 15.557 - 15.655: 99.7878% ( 1) 00:08:18.054 15.655 - 15.754: 99.7931% ( 1) 00:08:18.054 16.049 - 16.148: 99.8037% ( 2) 00:08:18.054 16.148 - 16.246: 99.8144% ( 2) 00:08:18.054 16.246 - 16.345: 99.8197% ( 1) 00:08:18.054 16.738 - 16.837: 99.8303% ( 2) 00:08:18.054 16.935 - 17.034: 99.8356% ( 1) 00:08:18.054 17.034 - 17.132: 99.8409% ( 1) 00:08:18.054 17.132 - 17.231: 99.8515% ( 2) 00:08:18.054 17.231 - 17.329: 99.8568% ( 1) 00:08:18.054 17.428 - 17.526: 99.8621% ( 1) 00:08:18.054 17.920 - 18.018: 99.8674% ( 1) 00:08:18.054 18.018 - 18.117: 99.8727% ( 1) 00:08:18.054 18.215 - 18.314: 99.8780% ( 1) 00:08:18.054 18.314 - 18.412: 99.8833% ( 1) 00:08:18.054 18.609 - 18.708: 99.8886% ( 1) 00:08:18.054 18.708 - 18.806: 99.8939% ( 1) 00:08:18.055 18.806 - 18.905: 99.8992% ( 1) 00:08:18.055 19.102 - 19.200: 99.9045% ( 1) 00:08:18.055 19.200 - 19.298: 99.9098% ( 1) 00:08:18.055 19.692 - 19.791: 99.9151% ( 1) 00:08:18.055 20.086 - 20.185: 99.9204% ( 1) 00:08:18.055 21.858 - 21.957: 99.9257% ( 1) 00:08:18.055 21.957 - 22.055: 99.9310% ( 1) 00:08:18.055 22.055 - 22.154: 99.9363% ( 1) 00:08:18.055 22.351 - 22.449: 99.9417% ( 1) 00:08:18.055 22.548 - 22.646: 99.9470% ( 1) 00:08:18.055 23.040 - 23.138: 99.9523% ( 1) 00:08:18.055 24.025 - 24.123: 99.9576% ( 1) 00:08:18.055 24.320 - 24.418: 99.9629% ( 1) 00:08:18.055 29.538 - 29.735: 99.9682% ( 1) 00:08:18.055 33.871 - 34.068: 99.9735% ( 1) 00:08:18.055 47.852 - 48.049: 99.9788% ( 1) 00:08:18.055 49.034 - 49.231: 99.9841% ( 1) 00:08:18.055 49.625 - 49.822: 99.9894% ( 1) 00:08:18.055 55.138 - 55.532: 99.9947% ( 1) 00:08:18.055 247.335 - 248.911: 100.0000% ( 1) 00:08:18.055 00:08:18.055 00:08:18.055 real 0m1.186s 00:08:18.055 user 0m1.064s 00:08:18.055 sys 0m0.072s 00:08:18.055 21:40:40 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.055 ************************************ 00:08:18.055 21:40:40 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:18.055 END TEST nvme_overhead 00:08:18.055 ************************************ 00:08:18.055 21:40:40 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:18.055 21:40:40 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:18.055 21:40:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.055 21:40:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:18.055 ************************************ 00:08:18.055 START TEST nvme_arbitration 00:08:18.055 ************************************ 00:08:18.055 21:40:40 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:21.347 Initializing NVMe Controllers 00:08:21.347 Attached to 0000:00:10.0 00:08:21.347 Attached to 0000:00:13.0 00:08:21.347 Attached to 0000:00:11.0 00:08:21.347 Attached to 0000:00:12.0 00:08:21.347 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:08:21.347 Associating QEMU NVMe Ctrl (12343 ) with lcore 1 00:08:21.347 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:08:21.347 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:21.347 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:21.347 Associating QEMU NVMe Ctrl (12342 ) with lcore 
1 00:08:21.347 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:21.347 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:21.347 Initialization complete. Launching workers. 00:08:21.347 Starting thread on core 1 with urgent priority queue 00:08:21.347 Starting thread on core 2 with urgent priority queue 00:08:21.347 Starting thread on core 3 with urgent priority queue 00:08:21.347 Starting thread on core 0 with urgent priority queue 00:08:21.347 QEMU NVMe Ctrl (12340 ) core 0: 7061.33 IO/s 14.16 secs/100000 ios 00:08:21.347 QEMU NVMe Ctrl (12342 ) core 0: 7061.33 IO/s 14.16 secs/100000 ios 00:08:21.347 QEMU NVMe Ctrl (12343 ) core 1: 6933.33 IO/s 14.42 secs/100000 ios 00:08:21.347 QEMU NVMe Ctrl (12342 ) core 1: 6933.33 IO/s 14.42 secs/100000 ios 00:08:21.347 QEMU NVMe Ctrl (12341 ) core 2: 6506.67 IO/s 15.37 secs/100000 ios 00:08:21.347 QEMU NVMe Ctrl (12342 ) core 3: 6421.67 IO/s 15.57 secs/100000 ios 00:08:21.347 ======================================================== 00:08:21.347 00:08:21.347 00:08:21.347 real 0m3.195s 00:08:21.347 user 0m9.009s 00:08:21.347 sys 0m0.087s 00:08:21.347 21:40:44 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.347 21:40:44 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:21.347 ************************************ 00:08:21.347 END TEST nvme_arbitration 00:08:21.347 ************************************ 00:08:21.347 21:40:44 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:21.347 21:40:44 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:21.347 21:40:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.347 21:40:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:21.347 ************************************ 00:08:21.347 START TEST nvme_single_aen 00:08:21.347 ************************************ 00:08:21.347 21:40:44 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:21.347 Asynchronous Event Request test 00:08:21.347 Attached to 0000:00:10.0 00:08:21.347 Attached to 0000:00:13.0 00:08:21.347 Attached to 0000:00:11.0 00:08:21.347 Attached to 0000:00:12.0 00:08:21.347 Reset controller to setup AER completions for this process 00:08:21.347 Registering asynchronous event callbacks... 
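The aer tool registers an asynchronous event callback on every attached controller and then lowers the temperature threshold below the current reading so each controller raises an event; that is what the threshold and aer_cb messages below report. A hedged sketch of those two steps with the public SPDK API follows; the helper name and the 273 K value are illustrative, not taken from the test.

/*
 * Hedged sketch: arm an AER on the temperature threshold.
 */
#include <stdio.h>
#include "spdk/nvme.h"

static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	/* The test reads log page 2 and restores the threshold here. */
	printf("aer_cb: cdw0=0x%x\n", cpl->cdw0);
}

static void
feature_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
}

static void
arm_temperature_aer(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
	/* Threshold (cdw11, in Kelvin) set below the ~323 K readings below. */
	spdk_nvme_ctrlr_cmd_set_feature(ctrlr, SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
					273, 0, NULL, 0, feature_done, NULL);
}

The callback only fires once the application polls admin completions (spdk_nvme_ctrlr_process_admin_completions()), which is presumably what happens while the log reports "Waiting for all controllers to trigger AER".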
00:08:21.347 Getting orig temperature thresholds of all controllers 00:08:21.347 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:21.347 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:21.347 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:21.347 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:21.347 Setting all controllers temperature threshold low to trigger AER 00:08:21.347 Waiting for all controllers temperature threshold to be set lower 00:08:21.347 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:21.347 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:21.347 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:21.347 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:21.347 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:21.347 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:21.347 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:21.347 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:21.347 Waiting for all controllers to trigger AER and reset threshold 00:08:21.347 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.347 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.347 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.347 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.347 Cleaning up... 00:08:21.347 00:08:21.347 real 0m0.192s 00:08:21.347 user 0m0.057s 00:08:21.347 sys 0m0.086s 00:08:21.347 21:40:44 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.347 21:40:44 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:21.347 ************************************ 00:08:21.347 END TEST nvme_single_aen 00:08:21.347 ************************************ 00:08:21.347 21:40:44 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:21.347 21:40:44 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:21.347 21:40:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.347 21:40:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:21.347 ************************************ 00:08:21.347 START TEST nvme_doorbell_aers 00:08:21.347 ************************************ 00:08:21.347 21:40:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:08:21.347 21:40:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:21.347 21:40:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:21.347 21:40:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:21.347 21:40:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:21.347 21:40:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:21.347 21:40:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:08:21.347 21:40:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:21.347 21:40:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:21.347 21:40:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
00:08:21.605 21:40:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:21.605 21:40:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:21.605 21:40:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:21.605 21:40:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:21.605 [2024-11-27 21:40:44.660373] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74630) is not found. Dropping the request. 00:08:31.567 Executing: test_write_invalid_db 00:08:31.567 Waiting for AER completion... 00:08:31.567 Failure: test_write_invalid_db 00:08:31.567 00:08:31.567 Executing: test_invalid_db_write_overflow_sq 00:08:31.567 Waiting for AER completion... 00:08:31.567 Failure: test_invalid_db_write_overflow_sq 00:08:31.567 00:08:31.567 Executing: test_invalid_db_write_overflow_cq 00:08:31.567 Waiting for AER completion... 00:08:31.567 Failure: test_invalid_db_write_overflow_cq 00:08:31.567 00:08:31.567 21:40:54 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:31.568 21:40:54 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:31.825 [2024-11-27 21:40:54.707081] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74630) is not found. Dropping the request. 00:08:41.790 Executing: test_write_invalid_db 00:08:41.790 Waiting for AER completion... 00:08:41.790 Failure: test_write_invalid_db 00:08:41.790 00:08:41.790 Executing: test_invalid_db_write_overflow_sq 00:08:41.790 Waiting for AER completion... 00:08:41.790 Failure: test_invalid_db_write_overflow_sq 00:08:41.790 00:08:41.790 Executing: test_invalid_db_write_overflow_cq 00:08:41.790 Waiting for AER completion... 00:08:41.790 Failure: test_invalid_db_write_overflow_cq 00:08:41.790 00:08:41.790 21:41:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:41.790 21:41:04 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:41.790 [2024-11-27 21:41:04.720099] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74630) is not found. Dropping the request. 00:08:51.762 Executing: test_write_invalid_db 00:08:51.762 Waiting for AER completion... 00:08:51.762 Failure: test_write_invalid_db 00:08:51.762 00:08:51.762 Executing: test_invalid_db_write_overflow_sq 00:08:51.762 Waiting for AER completion... 00:08:51.762 Failure: test_invalid_db_write_overflow_sq 00:08:51.762 00:08:51.762 Executing: test_invalid_db_write_overflow_cq 00:08:51.762 Waiting for AER completion... 
00:08:51.762 Failure: test_invalid_db_write_overflow_cq 00:08:51.762 00:08:51.762 21:41:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:51.762 21:41:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:51.762 [2024-11-27 21:41:14.746709] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74630) is not found. Dropping the request. 00:09:01.759 Executing: test_write_invalid_db 00:09:01.759 Waiting for AER completion... 00:09:01.759 Failure: test_write_invalid_db 00:09:01.759 00:09:01.759 Executing: test_invalid_db_write_overflow_sq 00:09:01.759 Waiting for AER completion... 00:09:01.759 Failure: test_invalid_db_write_overflow_sq 00:09:01.759 00:09:01.759 Executing: test_invalid_db_write_overflow_cq 00:09:01.759 Waiting for AER completion... 00:09:01.759 Failure: test_invalid_db_write_overflow_cq 00:09:01.759 00:09:01.759 00:09:01.759 real 0m40.178s 00:09:01.759 user 0m34.218s 00:09:01.759 sys 0m5.618s 00:09:01.759 21:41:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.759 ************************************ 00:09:01.759 END TEST nvme_doorbell_aers 00:09:01.759 ************************************ 00:09:01.759 21:41:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:01.759 21:41:24 nvme -- nvme/nvme.sh@97 -- # uname 00:09:01.759 21:41:24 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:01.759 21:41:24 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:01.759 21:41:24 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:01.759 21:41:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.759 21:41:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:01.759 ************************************ 00:09:01.759 START TEST nvme_multi_aen 00:09:01.759 ************************************ 00:09:01.759 21:41:24 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:01.759 [2024-11-27 21:41:24.795655] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74630) is not found. Dropping the request. 00:09:01.759 [2024-11-27 21:41:24.795720] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74630) is not found. Dropping the request. 00:09:01.759 [2024-11-27 21:41:24.795733] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74630) is not found. Dropping the request. 00:09:01.759 [2024-11-27 21:41:24.796896] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74630) is not found. Dropping the request. 00:09:01.759 [2024-11-27 21:41:24.796931] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74630) is not found. Dropping the request. 00:09:01.759 [2024-11-27 21:41:24.796941] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74630) is not found. Dropping the request. 00:09:01.759 [2024-11-27 21:41:24.797899] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74630) is not found. 
Dropping the request. 00:09:01.759 [2024-11-27 21:41:24.797931] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74630) is not found. Dropping the request. 00:09:01.759 [2024-11-27 21:41:24.797941] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74630) is not found. Dropping the request. 00:09:01.759 [2024-11-27 21:41:24.798892] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74630) is not found. Dropping the request. 00:09:01.759 [2024-11-27 21:41:24.798922] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74630) is not found. Dropping the request. 00:09:01.759 [2024-11-27 21:41:24.798931] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74630) is not found. Dropping the request. 00:09:01.759 Child process pid: 75156 00:09:02.017 [Child] Asynchronous Event Request test 00:09:02.017 [Child] Attached to 0000:00:10.0 00:09:02.017 [Child] Attached to 0000:00:13.0 00:09:02.017 [Child] Attached to 0000:00:11.0 00:09:02.017 [Child] Attached to 0000:00:12.0 00:09:02.017 [Child] Registering asynchronous event callbacks... 00:09:02.017 [Child] Getting orig temperature thresholds of all controllers 00:09:02.017 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:02.017 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:02.017 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:02.017 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:02.017 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:02.017 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:02.017 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:02.017 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:02.017 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:02.017 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:02.017 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:02.017 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:02.017 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:02.017 [Child] Cleaning up... 00:09:02.017 Asynchronous Event Request test 00:09:02.018 Attached to 0000:00:10.0 00:09:02.018 Attached to 0000:00:13.0 00:09:02.018 Attached to 0000:00:11.0 00:09:02.018 Attached to 0000:00:12.0 00:09:02.018 Reset controller to setup AER completions for this process 00:09:02.018 Registering asynchronous event callbacks... 
00:09:02.018 Getting orig temperature thresholds of all controllers 00:09:02.018 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:02.018 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:02.018 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:02.018 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:02.018 Setting all controllers temperature threshold low to trigger AER 00:09:02.018 Waiting for all controllers temperature threshold to be set lower 00:09:02.018 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:02.018 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:02.018 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:02.018 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:02.018 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:02.018 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:02.018 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:02.018 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:02.018 Waiting for all controllers to trigger AER and reset threshold 00:09:02.018 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:02.018 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:02.018 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:02.018 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:02.018 Cleaning up... 00:09:02.018 00:09:02.018 real 0m0.371s 00:09:02.018 user 0m0.127s 00:09:02.018 sys 0m0.145s 00:09:02.018 21:41:25 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.018 21:41:25 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:02.018 ************************************ 00:09:02.018 END TEST nvme_multi_aen 00:09:02.018 ************************************ 00:09:02.018 21:41:25 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:02.018 21:41:25 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:02.018 21:41:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.018 21:41:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:02.018 ************************************ 00:09:02.018 START TEST nvme_startup 00:09:02.018 ************************************ 00:09:02.018 21:41:25 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:02.276 Initializing NVMe Controllers 00:09:02.276 Attached to 0000:00:10.0 00:09:02.276 Attached to 0000:00:13.0 00:09:02.276 Attached to 0000:00:11.0 00:09:02.276 Attached to 0000:00:12.0 00:09:02.276 Initialization complete. 00:09:02.276 Time used:126233.281 (us). 
00:09:02.276 00:09:02.276 real 0m0.176s 00:09:02.276 user 0m0.057s 00:09:02.276 sys 0m0.073s 00:09:02.276 21:41:25 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.276 21:41:25 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:02.276 ************************************ 00:09:02.276 END TEST nvme_startup 00:09:02.276 ************************************ 00:09:02.276 21:41:25 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:02.276 21:41:25 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:02.276 21:41:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.276 21:41:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:02.276 ************************************ 00:09:02.276 START TEST nvme_multi_secondary 00:09:02.276 ************************************ 00:09:02.277 21:41:25 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:09:02.277 21:41:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=75207 00:09:02.277 21:41:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=75208 00:09:02.277 21:41:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:02.277 21:41:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:02.277 21:41:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:05.562 Initializing NVMe Controllers 00:09:05.562 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:05.562 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:05.562 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:05.562 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:05.562 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:05.562 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:05.562 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:05.562 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:05.562 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:05.562 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:05.562 Initialization complete. Launching workers. 
00:09:05.562 ======================================================== 00:09:05.562 Latency(us) 00:09:05.562 Device Information : IOPS MiB/s Average min max 00:09:05.562 PCIE (0000:00:10.0) NSID 1 from core 2: 3184.21 12.44 5021.90 819.98 15582.35 00:09:05.562 PCIE (0000:00:13.0) NSID 1 from core 2: 3184.21 12.44 5023.56 733.87 15586.84 00:09:05.562 PCIE (0000:00:11.0) NSID 1 from core 2: 3184.21 12.44 5024.01 857.07 12388.97 00:09:05.562 PCIE (0000:00:12.0) NSID 1 from core 2: 3184.21 12.44 5024.49 854.75 12370.26 00:09:05.562 PCIE (0000:00:12.0) NSID 2 from core 2: 3184.21 12.44 5024.51 828.42 15258.59 00:09:05.562 PCIE (0000:00:12.0) NSID 3 from core 2: 3184.21 12.44 5024.74 840.52 15396.64 00:09:05.562 ======================================================== 00:09:05.562 Total : 19105.27 74.63 5023.87 733.87 15586.84 00:09:05.562 00:09:05.562 21:41:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 75207 00:09:05.562 Initializing NVMe Controllers 00:09:05.562 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:05.562 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:05.562 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:05.562 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:05.562 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:05.562 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:05.562 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:05.562 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:05.562 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:05.562 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:05.562 Initialization complete. Launching workers. 00:09:05.562 ======================================================== 00:09:05.562 Latency(us) 00:09:05.562 Device Information : IOPS MiB/s Average min max 00:09:05.562 PCIE (0000:00:10.0) NSID 1 from core 1: 7697.87 30.07 2077.07 1066.44 7098.61 00:09:05.562 PCIE (0000:00:13.0) NSID 1 from core 1: 7697.87 30.07 2078.03 1125.32 6428.25 00:09:05.562 PCIE (0000:00:11.0) NSID 1 from core 1: 7697.87 30.07 2078.00 1087.14 5796.46 00:09:05.562 PCIE (0000:00:12.0) NSID 1 from core 1: 7697.87 30.07 2077.96 1040.32 5958.37 00:09:05.562 PCIE (0000:00:12.0) NSID 2 from core 1: 7697.87 30.07 2077.93 1039.99 5897.07 00:09:05.562 PCIE (0000:00:12.0) NSID 3 from core 1: 7697.87 30.07 2077.89 942.83 6417.92 00:09:05.562 ======================================================== 00:09:05.562 Total : 46187.25 180.42 2077.81 942.83 7098.61 00:09:05.562 00:09:07.488 Initializing NVMe Controllers 00:09:07.488 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:07.488 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:07.488 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:07.488 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:07.488 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:07.488 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:07.488 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:07.488 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:07.488 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:07.488 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:07.488 Initialization complete. Launching workers. 
00:09:07.488 ======================================================== 00:09:07.488 Latency(us) 00:09:07.488 Device Information : IOPS MiB/s Average min max 00:09:07.488 PCIE (0000:00:10.0) NSID 1 from core 0: 10991.33 42.93 1454.44 680.51 5822.34 00:09:07.488 PCIE (0000:00:13.0) NSID 1 from core 0: 10991.33 42.93 1455.27 661.80 5519.34 00:09:07.488 PCIE (0000:00:11.0) NSID 1 from core 0: 10991.33 42.93 1455.24 574.17 5583.81 00:09:07.488 PCIE (0000:00:12.0) NSID 1 from core 0: 10991.33 42.93 1455.21 504.91 5728.63 00:09:07.488 PCIE (0000:00:12.0) NSID 2 from core 0: 10991.33 42.93 1455.19 450.13 5488.84 00:09:07.488 PCIE (0000:00:12.0) NSID 3 from core 0: 10991.33 42.93 1455.15 385.21 6221.96 00:09:07.488 ======================================================== 00:09:07.488 Total : 65948.00 257.61 1455.08 385.21 6221.96 00:09:07.488 00:09:07.488 21:41:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 75208 00:09:07.488 21:41:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=75277 00:09:07.488 21:41:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:07.488 21:41:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=75278 00:09:07.488 21:41:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:07.488 21:41:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:10.771 Initializing NVMe Controllers 00:09:10.771 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:10.771 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:10.771 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:10.771 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:10.771 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:10.771 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:10.771 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:10.771 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:10.771 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:10.771 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:10.771 Initialization complete. Launching workers. 
00:09:10.771 ======================================================== 00:09:10.771 Latency(us) 00:09:10.771 Device Information : IOPS MiB/s Average min max 00:09:10.771 PCIE (0000:00:10.0) NSID 1 from core 0: 7839.39 30.62 2039.59 711.25 5860.99 00:09:10.771 PCIE (0000:00:13.0) NSID 1 from core 0: 7839.39 30.62 2040.74 705.48 5875.31 00:09:10.771 PCIE (0000:00:11.0) NSID 1 from core 0: 7839.39 30.62 2040.74 734.40 5731.57 00:09:10.771 PCIE (0000:00:12.0) NSID 1 from core 0: 7839.39 30.62 2040.85 734.89 5556.12 00:09:10.771 PCIE (0000:00:12.0) NSID 2 from core 0: 7839.39 30.62 2040.71 735.22 5805.46 00:09:10.771 PCIE (0000:00:12.0) NSID 3 from core 0: 7839.39 30.62 2040.69 730.33 6102.00 00:09:10.771 ======================================================== 00:09:10.771 Total : 47036.33 183.74 2040.55 705.48 6102.00 00:09:10.771 00:09:10.771 Initializing NVMe Controllers 00:09:10.771 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:10.771 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:10.771 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:10.771 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:10.771 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:10.771 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:10.771 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:10.771 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:10.771 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:10.771 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:10.771 Initialization complete. Launching workers. 00:09:10.771 ======================================================== 00:09:10.771 Latency(us) 00:09:10.771 Device Information : IOPS MiB/s Average min max 00:09:10.771 PCIE (0000:00:10.0) NSID 1 from core 1: 8064.59 31.50 1982.63 712.53 5276.53 00:09:10.771 PCIE (0000:00:13.0) NSID 1 from core 1: 8064.59 31.50 1983.62 729.63 5547.44 00:09:10.771 PCIE (0000:00:11.0) NSID 1 from core 1: 8064.59 31.50 1983.66 732.64 5960.49 00:09:10.771 PCIE (0000:00:12.0) NSID 1 from core 1: 8064.59 31.50 1983.69 725.14 5586.40 00:09:10.771 PCIE (0000:00:12.0) NSID 2 from core 1: 8064.59 31.50 1983.65 743.52 5295.37 00:09:10.771 PCIE (0000:00:12.0) NSID 3 from core 1: 8064.59 31.50 1983.66 731.88 5724.99 00:09:10.771 ======================================================== 00:09:10.771 Total : 48387.56 189.01 1983.48 712.53 5960.49 00:09:10.771 00:09:13.302 Initializing NVMe Controllers 00:09:13.302 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:13.302 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:13.302 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:13.302 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:13.302 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:13.302 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:13.302 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:13.302 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:13.302 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:13.302 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:13.302 Initialization complete. Launching workers. 
00:09:13.302 ======================================================== 00:09:13.302 Latency(us) 00:09:13.302 Device Information : IOPS MiB/s Average min max 00:09:13.303 PCIE (0000:00:10.0) NSID 1 from core 2: 4859.43 18.98 3289.91 730.93 12854.55 00:09:13.303 PCIE (0000:00:13.0) NSID 1 from core 2: 4859.43 18.98 3292.14 697.81 12761.24 00:09:13.303 PCIE (0000:00:11.0) NSID 1 from core 2: 4859.43 18.98 3292.07 685.55 13124.86 00:09:13.303 PCIE (0000:00:12.0) NSID 1 from core 2: 4859.43 18.98 3291.99 606.54 13194.69 00:09:13.303 PCIE (0000:00:12.0) NSID 2 from core 2: 4859.43 18.98 3291.57 515.32 13054.03 00:09:13.303 PCIE (0000:00:12.0) NSID 3 from core 2: 4859.43 18.98 3291.66 433.25 12598.95 00:09:13.303 ======================================================== 00:09:13.303 Total : 29156.56 113.89 3291.55 433.25 13194.69 00:09:13.303 00:09:13.303 21:41:35 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 75277 00:09:13.303 21:41:35 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 75278 00:09:13.303 00:09:13.303 real 0m10.706s 00:09:13.303 user 0m18.298s 00:09:13.303 sys 0m0.523s 00:09:13.303 21:41:35 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.303 ************************************ 00:09:13.303 END TEST nvme_multi_secondary 00:09:13.303 ************************************ 00:09:13.303 21:41:35 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:13.303 21:41:36 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:13.303 21:41:36 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:13.303 21:41:36 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/74245 ]] 00:09:13.303 21:41:36 nvme -- common/autotest_common.sh@1094 -- # kill 74245 00:09:13.303 21:41:36 nvme -- common/autotest_common.sh@1095 -- # wait 74245 00:09:13.303 [2024-11-27 21:41:36.013379] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75155) is not found. Dropping the request. 00:09:13.303 [2024-11-27 21:41:36.013501] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75155) is not found. Dropping the request. 00:09:13.303 [2024-11-27 21:41:36.013534] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75155) is not found. Dropping the request. 00:09:13.303 [2024-11-27 21:41:36.013569] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75155) is not found. Dropping the request. 00:09:13.303 [2024-11-27 21:41:36.014329] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75155) is not found. Dropping the request. 00:09:13.303 [2024-11-27 21:41:36.014432] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75155) is not found. Dropping the request. 00:09:13.303 [2024-11-27 21:41:36.014462] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75155) is not found. Dropping the request. 00:09:13.303 [2024-11-27 21:41:36.014492] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75155) is not found. Dropping the request. 00:09:13.303 [2024-11-27 21:41:36.015256] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75155) is not found. Dropping the request. 
00:09:13.303 [2024-11-27 21:41:36.015355] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75155) is not found. Dropping the request. 00:09:13.303 [2024-11-27 21:41:36.015383] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75155) is not found. Dropping the request. 00:09:13.303 [2024-11-27 21:41:36.015416] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75155) is not found. Dropping the request. 00:09:13.303 [2024-11-27 21:41:36.016149] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75155) is not found. Dropping the request. 00:09:13.303 [2024-11-27 21:41:36.016230] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75155) is not found. Dropping the request. 00:09:13.303 [2024-11-27 21:41:36.016291] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75155) is not found. Dropping the request. 00:09:13.303 [2024-11-27 21:41:36.016322] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75155) is not found. Dropping the request. 00:09:13.303 21:41:36 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:09:13.303 21:41:36 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:09:13.303 21:41:36 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:13.303 21:41:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:13.303 21:41:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.303 21:41:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:13.303 ************************************ 00:09:13.303 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:13.303 ************************************ 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:13.303 * Looking for test storage... 
00:09:13.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:13.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.303 --rc genhtml_branch_coverage=1 00:09:13.303 --rc genhtml_function_coverage=1 00:09:13.303 --rc genhtml_legend=1 00:09:13.303 --rc geninfo_all_blocks=1 00:09:13.303 --rc geninfo_unexecuted_blocks=1 00:09:13.303 00:09:13.303 ' 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:13.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.303 --rc genhtml_branch_coverage=1 00:09:13.303 --rc genhtml_function_coverage=1 00:09:13.303 --rc genhtml_legend=1 00:09:13.303 --rc geninfo_all_blocks=1 00:09:13.303 --rc geninfo_unexecuted_blocks=1 00:09:13.303 00:09:13.303 ' 00:09:13.303 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:13.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.303 --rc genhtml_branch_coverage=1 00:09:13.304 --rc genhtml_function_coverage=1 00:09:13.304 --rc genhtml_legend=1 00:09:13.304 --rc geninfo_all_blocks=1 00:09:13.304 --rc geninfo_unexecuted_blocks=1 00:09:13.304 00:09:13.304 ' 00:09:13.304 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:13.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.304 --rc genhtml_branch_coverage=1 00:09:13.304 --rc genhtml_function_coverage=1 00:09:13.304 --rc genhtml_legend=1 00:09:13.304 --rc geninfo_all_blocks=1 00:09:13.304 --rc geninfo_unexecuted_blocks=1 00:09:13.304 00:09:13.304 ' 00:09:13.304 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:13.304 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:13.304 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:13.304 
21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:13.304 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:13.304 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:13.304 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:13.304 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:09:13.304 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:13.304 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:13.304 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:13.304 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:09:13.304 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:13.304 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:13.304 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:13.304 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:13.304 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:13.304 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:13.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.304 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:13.304 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:13.304 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=75439 00:09:13.304 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:13.304 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 75439 00:09:13.304 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:13.304 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 75439 ']' 00:09:13.304 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.304 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.304 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:13.304 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.304 21:41:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:13.304 [2024-11-27 21:41:36.357755] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:09:13.304 [2024-11-27 21:41:36.357872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75439 ] 00:09:13.563 [2024-11-27 21:41:36.507908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:13.563 [2024-11-27 21:41:36.527234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.563 [2024-11-27 21:41:36.527473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:13.563 [2024-11-27 21:41:36.527643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:13.563 [2024-11-27 21:41:36.527772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.130 21:41:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.130 21:41:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:09:14.130 21:41:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:14.130 21:41:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.130 21:41:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:14.130 nvme0n1 00:09:14.130 21:41:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.130 21:41:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:14.389 21:41:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_AZU4V.txt 00:09:14.389 21:41:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:14.389 21:41:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.389 21:41:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:14.389 true 00:09:14.389 21:41:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.389 21:41:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:14.389 21:41:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732743697 00:09:14.389 21:41:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=75462 00:09:14.389 21:41:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:14.389 21:41:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:14.389 
21:41:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:16.287 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:16.287 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.287 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:16.287 [2024-11-27 21:41:39.273959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:16.287 [2024-11-27 21:41:39.274276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:16.287 [2024-11-27 21:41:39.274313] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:16.287 [2024-11-27 21:41:39.274331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:16.287 [2024-11-27 21:41:39.276462] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:16.287 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 75462 00:09:16.287 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.287 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 75462 00:09:16.287 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 75462 00:09:16.287 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:16.287 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:16.287 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:16.287 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.287 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:16.287 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.287 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:16.287 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_AZU4V.txt 00:09:16.287 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:16.287 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:16.287 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:16.287 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:16.288 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:16.288 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:16.288 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:16.288 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:16.288 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:16.288 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:16.288 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:16.288 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:16.288 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:16.288 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:16.288 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:16.288 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:16.288 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:16.288 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:16.288 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:16.288 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_AZU4V.txt 00:09:16.288 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 75439 00:09:16.288 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 75439 ']' 00:09:16.288 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 75439 00:09:16.288 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:09:16.288 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.288 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75439 00:09:16.288 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.288 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.288 killing process with pid 75439 00:09:16.288 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75439' 00:09:16.288 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 75439 00:09:16.288 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 75439 00:09:16.853 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:16.853 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:16.853 00:09:16.853 real 0m3.621s 00:09:16.853 user 0m13.030s 00:09:16.853 sys 0m0.437s 00:09:16.853 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:09:16.853 21:41:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:16.853 ************************************ 00:09:16.853 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:16.853 ************************************ 00:09:16.853 21:41:39 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:16.853 21:41:39 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:16.853 21:41:39 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:16.853 21:41:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.853 21:41:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:16.853 ************************************ 00:09:16.853 START TEST nvme_fio 00:09:16.853 ************************************ 00:09:16.853 21:41:39 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:09:16.853 21:41:39 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:16.853 21:41:39 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:16.853 21:41:39 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:16.853 21:41:39 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:16.853 21:41:39 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:09:16.853 21:41:39 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:16.853 21:41:39 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:16.853 21:41:39 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:16.853 21:41:39 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:16.853 21:41:39 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:16.853 21:41:39 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:16.853 21:41:39 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:16.853 21:41:39 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:16.853 21:41:39 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:16.853 21:41:39 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:17.111 21:41:40 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:17.111 21:41:40 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:17.111 21:41:40 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:17.111 21:41:40 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:17.111 21:41:40 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:17.111 21:41:40 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:17.111 21:41:40 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:17.111 21:41:40 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:17.111 21:41:40 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:17.111 21:41:40 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:17.111 21:41:40 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:17.111 21:41:40 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:17.111 21:41:40 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:17.111 21:41:40 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:17.111 21:41:40 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:17.111 21:41:40 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:17.111 21:41:40 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:17.111 21:41:40 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:17.111 21:41:40 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:17.111 21:41:40 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:17.369 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:17.369 fio-3.35 00:09:17.369 Starting 1 thread 00:09:21.567 00:09:21.567 test: (groupid=0, jobs=1): err= 0: pid=75591: Wed Nov 27 21:41:44 2024 00:09:21.567 read: IOPS=20.5k, BW=80.0MiB/s (83.9MB/s)(160MiB/2001msec) 00:09:21.567 slat (nsec): min=4217, max=72128, avg=5267.50, stdev=2615.38 00:09:21.567 clat (usec): min=370, max=13091, avg=3111.25, stdev=1110.66 00:09:21.567 lat (usec): min=375, max=13151, avg=3116.51, stdev=1111.89 00:09:21.567 clat percentiles (usec): 00:09:21.567 | 1.00th=[ 1991], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2442], 00:09:21.567 | 30.00th=[ 2507], 40.00th=[ 2606], 50.00th=[ 2704], 60.00th=[ 2835], 00:09:21.567 | 70.00th=[ 3032], 80.00th=[ 3490], 90.00th=[ 4817], 95.00th=[ 5735], 00:09:21.567 | 99.00th=[ 7111], 99.50th=[ 7308], 99.90th=[ 8586], 99.95th=[10028], 00:09:21.567 | 99.99th=[12780] 00:09:21.567 bw ( KiB/s): min=76713, max=87648, per=99.92%, avg=81908.75, stdev=4481.86, samples=4 00:09:21.567 iops : min=19178, max=21912, avg=20477.00, stdev=1120.58, samples=4 00:09:21.567 write: IOPS=20.4k, BW=79.9MiB/s (83.7MB/s)(160MiB/2001msec); 0 zone resets 00:09:21.567 slat (nsec): min=4300, max=90374, avg=5396.95, stdev=2607.06 00:09:21.567 clat (usec): min=379, max=12885, avg=3121.26, stdev=1099.71 00:09:21.567 lat (usec): min=384, max=12899, avg=3126.66, stdev=1100.92 00:09:21.567 clat percentiles (usec): 00:09:21.567 | 1.00th=[ 2008], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2474], 00:09:21.567 | 30.00th=[ 2540], 40.00th=[ 2606], 50.00th=[ 2704], 60.00th=[ 2835], 00:09:21.567 | 70.00th=[ 3032], 80.00th=[ 3490], 90.00th=[ 4752], 95.00th=[ 5735], 00:09:21.567 | 99.00th=[ 7046], 99.50th=[ 7242], 99.90th=[ 8586], 99.95th=[10290], 00:09:21.567 | 99.99th=[12387] 00:09:21.567 bw ( KiB/s): min=77274, max=87520, per=99.93%, avg=81710.50, stdev=4285.46, samples=4 00:09:21.567 iops : min=19318, max=21880, avg=20427.50, stdev=1071.54, samples=4 00:09:21.567 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.02% 00:09:21.567 lat (msec) : 2=0.96%, 4=83.74%, 10=15.20%, 20=0.05% 00:09:21.567 cpu : usr=99.00%, sys=0.10%, ctx=10, 
majf=0, minf=624 00:09:21.567 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:21.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:21.567 issued rwts: total=41006,40905,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.567 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:21.567 00:09:21.567 Run status group 0 (all jobs): 00:09:21.567 READ: bw=80.0MiB/s (83.9MB/s), 80.0MiB/s-80.0MiB/s (83.9MB/s-83.9MB/s), io=160MiB (168MB), run=2001-2001msec 00:09:21.567 WRITE: bw=79.9MiB/s (83.7MB/s), 79.9MiB/s-79.9MiB/s (83.7MB/s-83.7MB/s), io=160MiB (168MB), run=2001-2001msec 00:09:21.827 ----------------------------------------------------- 00:09:21.827 Suppressions used: 00:09:21.827 count bytes template 00:09:21.827 1 32 /usr/src/fio/parse.c 00:09:21.827 1 8 libtcmalloc_minimal.so 00:09:21.827 ----------------------------------------------------- 00:09:21.827 00:09:21.827 21:41:44 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:21.827 21:41:44 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:21.827 21:41:44 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:21.827 21:41:44 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:22.090 21:41:45 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:22.091 21:41:45 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:22.351 21:41:45 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:22.351 21:41:45 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:22.351 21:41:45 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:22.351 21:41:45 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:22.351 21:41:45 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:22.351 21:41:45 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:22.351 21:41:45 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:22.351 21:41:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:22.351 21:41:45 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:22.351 21:41:45 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:22.351 21:41:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:22.351 21:41:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:22.351 21:41:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:22.351 21:41:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:22.351 21:41:45 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:22.351 21:41:45 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:22.351 21:41:45 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:22.351 21:41:45 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:22.351 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:22.351 fio-3.35 00:09:22.351 Starting 1 thread 00:09:28.936 00:09:28.936 test: (groupid=0, jobs=1): err= 0: pid=75646: Wed Nov 27 21:41:51 2024 00:09:28.936 read: IOPS=20.7k, BW=81.0MiB/s (85.0MB/s)(162MiB/2001msec) 00:09:28.936 slat (nsec): min=4201, max=82616, avg=5249.74, stdev=2640.07 00:09:28.936 clat (usec): min=239, max=14953, avg=3067.07, stdev=1104.66 00:09:28.936 lat (usec): min=243, max=15001, avg=3072.32, stdev=1105.89 00:09:28.936 clat percentiles (usec): 00:09:28.936 | 1.00th=[ 1975], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2376], 00:09:28.936 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2638], 60.00th=[ 2769], 00:09:28.936 | 70.00th=[ 2999], 80.00th=[ 3523], 90.00th=[ 4752], 95.00th=[ 5538], 00:09:28.936 | 99.00th=[ 6783], 99.50th=[ 7242], 99.90th=[ 8848], 99.95th=[11994], 00:09:28.936 | 99.99th=[14484] 00:09:28.936 bw ( KiB/s): min=78248, max=84016, per=98.57%, avg=81802.67, stdev=3109.15, samples=3 00:09:28.936 iops : min=19562, max=21004, avg=20450.67, stdev=777.29, samples=3 00:09:28.936 write: IOPS=20.7k, BW=80.7MiB/s (84.6MB/s)(162MiB/2001msec); 0 zone resets 00:09:28.936 slat (nsec): min=4271, max=72923, avg=5399.14, stdev=2676.36 00:09:28.936 clat (usec): min=271, max=14568, avg=3093.58, stdev=1112.24 00:09:28.936 lat (usec): min=276, max=14583, avg=3098.98, stdev=1113.49 00:09:28.936 clat percentiles (usec): 00:09:28.936 | 1.00th=[ 2008], 5.00th=[ 2245], 10.00th=[ 2311], 20.00th=[ 2376], 00:09:28.936 | 30.00th=[ 2442], 40.00th=[ 2540], 50.00th=[ 2638], 60.00th=[ 2802], 00:09:28.936 | 70.00th=[ 3032], 80.00th=[ 3589], 90.00th=[ 4817], 95.00th=[ 5604], 00:09:28.936 | 99.00th=[ 6849], 99.50th=[ 7242], 99.90th=[ 9634], 99.95th=[12125], 00:09:28.936 | 99.99th=[14353] 00:09:28.936 bw ( KiB/s): min=78616, max=83944, per=99.13%, avg=81944.00, stdev=2901.65, samples=3 00:09:28.936 iops : min=19654, max=20986, avg=20486.00, stdev=725.41, samples=3 00:09:28.936 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01% 00:09:28.936 lat (msec) : 2=1.03%, 4=82.75%, 10=16.13%, 20=0.07% 00:09:28.936 cpu : usr=98.85%, sys=0.25%, ctx=3, majf=0, minf=625 00:09:28.936 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:28.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.936 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:28.936 issued rwts: total=41517,41353,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.936 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:28.936 00:09:28.936 Run status group 0 (all jobs): 00:09:28.937 READ: bw=81.0MiB/s (85.0MB/s), 81.0MiB/s-81.0MiB/s (85.0MB/s-85.0MB/s), io=162MiB (170MB), run=2001-2001msec 00:09:28.937 WRITE: bw=80.7MiB/s (84.6MB/s), 80.7MiB/s-80.7MiB/s (84.6MB/s-84.6MB/s), io=162MiB (169MB), run=2001-2001msec 00:09:28.937 ----------------------------------------------------- 00:09:28.937 Suppressions used: 00:09:28.937 count bytes template 00:09:28.937 1 32 /usr/src/fio/parse.c 00:09:28.937 1 8 libtcmalloc_minimal.so 00:09:28.937 ----------------------------------------------------- 00:09:28.937 
00:09:28.937 21:41:52 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:28.937 21:41:52 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:28.937 21:41:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:28.937 21:41:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:29.198 21:41:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:29.198 21:41:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:29.459 21:41:52 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:29.459 21:41:52 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:29.459 21:41:52 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:29.459 21:41:52 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:29.459 21:41:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:29.459 21:41:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:29.459 21:41:52 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:29.459 21:41:52 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:29.459 21:41:52 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:29.459 21:41:52 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:29.459 21:41:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:29.459 21:41:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:29.459 21:41:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:29.459 21:41:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:29.459 21:41:52 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:29.459 21:41:52 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:29.459 21:41:52 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:29.459 21:41:52 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:29.720 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:29.720 fio-3.35 00:09:29.720 Starting 1 thread 00:09:35.014 00:09:35.014 test: (groupid=0, jobs=1): err= 0: pid=75701: Wed Nov 27 21:41:57 2024 00:09:35.014 read: IOPS=14.8k, BW=58.0MiB/s (60.8MB/s)(116MiB/2001msec) 00:09:35.014 slat (usec): min=4, max=169, avg= 6.78, stdev= 4.21 00:09:35.014 clat (usec): min=271, max=13784, avg=4267.40, stdev=1451.88 00:09:35.014 lat (usec): min=276, max=13846, avg=4274.17, stdev=1453.25 00:09:35.014 clat percentiles (usec): 00:09:35.014 | 1.00th=[ 2474], 5.00th=[ 2737], 10.00th=[ 2868], 20.00th=[ 3032], 00:09:35.014 | 30.00th=[ 
3195], 40.00th=[ 3392], 50.00th=[ 3752], 60.00th=[ 4293], 00:09:35.014 | 70.00th=[ 5014], 80.00th=[ 5604], 90.00th=[ 6259], 95.00th=[ 6915], 00:09:35.014 | 99.00th=[ 8717], 99.50th=[ 9372], 99.90th=[10290], 99.95th=[10683], 00:09:35.014 | 99.99th=[13698] 00:09:35.014 bw ( KiB/s): min=50584, max=62032, per=95.21%, avg=56501.33, stdev=5733.79, samples=3 00:09:35.014 iops : min=12646, max=15508, avg=14125.33, stdev=1433.45, samples=3 00:09:35.014 write: IOPS=14.8k, BW=58.0MiB/s (60.8MB/s)(116MiB/2001msec); 0 zone resets 00:09:35.014 slat (usec): min=4, max=260, avg= 6.94, stdev= 4.31 00:09:35.014 clat (usec): min=280, max=13212, avg=4330.15, stdev=1477.39 00:09:35.014 lat (usec): min=285, max=13224, avg=4337.09, stdev=1478.71 00:09:35.014 clat percentiles (usec): 00:09:35.014 | 1.00th=[ 2474], 5.00th=[ 2769], 10.00th=[ 2900], 20.00th=[ 3064], 00:09:35.014 | 30.00th=[ 3228], 40.00th=[ 3458], 50.00th=[ 3818], 60.00th=[ 4359], 00:09:35.014 | 70.00th=[ 5080], 80.00th=[ 5669], 90.00th=[ 6325], 95.00th=[ 7046], 00:09:35.014 | 99.00th=[ 8848], 99.50th=[ 9503], 99.90th=[10421], 99.95th=[10683], 00:09:35.014 | 99.99th=[13173] 00:09:35.014 bw ( KiB/s): min=50328, max=61944, per=95.14%, avg=56493.33, stdev=5840.88, samples=3 00:09:35.014 iops : min=12582, max=15486, avg=14123.33, stdev=1460.22, samples=3 00:09:35.014 lat (usec) : 500=0.02%, 750=0.03%, 1000=0.02% 00:09:35.014 lat (msec) : 2=0.25%, 4=54.44%, 10=45.07%, 20=0.18% 00:09:35.014 cpu : usr=97.90%, sys=0.20%, ctx=15, majf=0, minf=625 00:09:35.014 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:35.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:35.014 issued rwts: total=29688,29703,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.014 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:35.014 00:09:35.014 Run status group 0 (all jobs): 00:09:35.014 READ: bw=58.0MiB/s (60.8MB/s), 58.0MiB/s-58.0MiB/s (60.8MB/s-60.8MB/s), io=116MiB (122MB), run=2001-2001msec 00:09:35.014 WRITE: bw=58.0MiB/s (60.8MB/s), 58.0MiB/s-58.0MiB/s (60.8MB/s-60.8MB/s), io=116MiB (122MB), run=2001-2001msec 00:09:35.014 ----------------------------------------------------- 00:09:35.014 Suppressions used: 00:09:35.014 count bytes template 00:09:35.014 1 32 /usr/src/fio/parse.c 00:09:35.014 1 8 libtcmalloc_minimal.so 00:09:35.014 ----------------------------------------------------- 00:09:35.014 00:09:35.014 21:41:57 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:35.014 21:41:57 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:35.014 21:41:57 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:35.014 21:41:57 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:35.014 21:41:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:35.014 21:41:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:35.276 21:41:58 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:35.276 21:41:58 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:35.276 21:41:58 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:35.276 21:41:58 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:35.276 21:41:58 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:35.276 21:41:58 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:35.276 21:41:58 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:35.276 21:41:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:35.276 21:41:58 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:35.276 21:41:58 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:35.276 21:41:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:35.276 21:41:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:35.276 21:41:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:35.276 21:41:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:35.276 21:41:58 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:35.276 21:41:58 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:35.276 21:41:58 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:35.276 21:41:58 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:35.537 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:35.537 fio-3.35 00:09:35.537 Starting 1 thread 00:09:39.743 00:09:39.743 test: (groupid=0, jobs=1): err= 0: pid=75762: Wed Nov 27 21:42:02 2024 00:09:39.743 read: IOPS=14.4k, BW=56.2MiB/s (59.0MB/s)(113MiB/2001msec) 00:09:39.743 slat (nsec): min=4876, max=98019, avg=7067.55, stdev=4318.70 00:09:39.743 clat (usec): min=301, max=13973, avg=4415.32, stdev=1550.94 00:09:39.743 lat (usec): min=307, max=14021, avg=4422.39, stdev=1552.27 00:09:39.743 clat percentiles (usec): 00:09:39.743 | 1.00th=[ 2376], 5.00th=[ 2671], 10.00th=[ 2802], 20.00th=[ 3032], 00:09:39.743 | 30.00th=[ 3228], 40.00th=[ 3490], 50.00th=[ 3982], 60.00th=[ 4621], 00:09:39.743 | 70.00th=[ 5211], 80.00th=[ 5866], 90.00th=[ 6652], 95.00th=[ 7242], 00:09:39.743 | 99.00th=[ 8455], 99.50th=[ 8979], 99.90th=[10945], 99.95th=[11994], 00:09:39.743 | 99.99th=[13960] 00:09:39.743 bw ( KiB/s): min=52648, max=61216, per=98.64%, avg=56800.00, stdev=4290.10, samples=3 00:09:39.743 iops : min=13162, max=15304, avg=14200.00, stdev=1072.52, samples=3 00:09:39.743 write: IOPS=14.4k, BW=56.3MiB/s (59.1MB/s)(113MiB/2001msec); 0 zone resets 00:09:39.743 slat (usec): min=5, max=155, avg= 7.28, stdev= 4.56 00:09:39.743 clat (usec): min=312, max=13853, avg=4441.56, stdev=1551.47 00:09:39.743 lat (usec): min=319, max=13911, avg=4448.85, stdev=1552.92 00:09:39.743 clat percentiles (usec): 00:09:39.743 | 1.00th=[ 2409], 5.00th=[ 2704], 10.00th=[ 2835], 20.00th=[ 3032], 00:09:39.743 | 30.00th=[ 3261], 40.00th=[ 3490], 50.00th=[ 3982], 60.00th=[ 4686], 00:09:39.743 | 70.00th=[ 5276], 80.00th=[ 5932], 90.00th=[ 6652], 95.00th=[ 7308], 00:09:39.743 
| 99.00th=[ 8455], 99.50th=[ 8979], 99.90th=[11076], 99.95th=[12125], 00:09:39.743 | 99.99th=[13698] 00:09:39.743 bw ( KiB/s): min=53240, max=61600, per=98.48%, avg=56792.00, stdev=4319.21, samples=3 00:09:39.743 iops : min=13310, max=15400, avg=14198.00, stdev=1079.80, samples=3 00:09:39.743 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:39.743 lat (msec) : 2=0.27%, 4=50.23%, 10=49.24%, 20=0.22% 00:09:39.743 cpu : usr=98.25%, sys=0.20%, ctx=4, majf=0, minf=623 00:09:39.743 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:39.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.743 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:39.743 issued rwts: total=28807,28848,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.743 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:39.743 00:09:39.743 Run status group 0 (all jobs): 00:09:39.743 READ: bw=56.2MiB/s (59.0MB/s), 56.2MiB/s-56.2MiB/s (59.0MB/s-59.0MB/s), io=113MiB (118MB), run=2001-2001msec 00:09:39.743 WRITE: bw=56.3MiB/s (59.1MB/s), 56.3MiB/s-56.3MiB/s (59.1MB/s-59.1MB/s), io=113MiB (118MB), run=2001-2001msec 00:09:40.005 ----------------------------------------------------- 00:09:40.005 Suppressions used: 00:09:40.005 count bytes template 00:09:40.005 1 32 /usr/src/fio/parse.c 00:09:40.005 1 8 libtcmalloc_minimal.so 00:09:40.005 ----------------------------------------------------- 00:09:40.005 00:09:40.005 21:42:03 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:40.005 21:42:03 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:40.005 00:09:40.005 real 0m23.310s 00:09:40.005 user 0m16.779s 00:09:40.005 sys 0m9.778s 00:09:40.005 ************************************ 00:09:40.005 END TEST nvme_fio 00:09:40.005 ************************************ 00:09:40.005 21:42:03 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.005 21:42:03 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:40.005 00:09:40.005 real 1m30.602s 00:09:40.005 user 3m31.819s 00:09:40.005 sys 0m19.498s 00:09:40.005 21:42:03 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.005 ************************************ 00:09:40.005 END TEST nvme 00:09:40.005 ************************************ 00:09:40.005 21:42:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:40.268 21:42:03 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:40.268 21:42:03 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:40.268 21:42:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:40.268 21:42:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.268 21:42:03 -- common/autotest_common.sh@10 -- # set +x 00:09:40.268 ************************************ 00:09:40.268 START TEST nvme_scc 00:09:40.268 ************************************ 00:09:40.268 21:42:03 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:40.268 * Looking for test storage... 
00:09:40.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:40.268 21:42:03 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:40.268 21:42:03 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:40.268 21:42:03 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:40.268 21:42:03 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:40.268 21:42:03 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.268 21:42:03 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.268 21:42:03 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.268 21:42:03 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.268 21:42:03 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.268 21:42:03 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.268 21:42:03 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.268 21:42:03 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.268 21:42:03 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.268 21:42:03 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.268 21:42:03 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.268 21:42:03 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:40.268 21:42:03 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:40.268 21:42:03 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.268 21:42:03 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:40.268 21:42:03 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:40.268 21:42:03 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:40.268 21:42:03 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.268 21:42:03 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:40.268 21:42:03 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.268 21:42:03 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:40.268 21:42:03 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:40.268 21:42:03 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.268 21:42:03 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:40.268 21:42:03 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.268 21:42:03 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.268 21:42:03 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.268 21:42:03 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:40.268 21:42:03 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.268 21:42:03 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:40.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.269 --rc genhtml_branch_coverage=1 00:09:40.269 --rc genhtml_function_coverage=1 00:09:40.269 --rc genhtml_legend=1 00:09:40.269 --rc geninfo_all_blocks=1 00:09:40.269 --rc geninfo_unexecuted_blocks=1 00:09:40.269 00:09:40.269 ' 00:09:40.269 21:42:03 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:40.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.269 --rc genhtml_branch_coverage=1 00:09:40.269 --rc genhtml_function_coverage=1 00:09:40.269 --rc genhtml_legend=1 00:09:40.269 --rc geninfo_all_blocks=1 00:09:40.269 --rc geninfo_unexecuted_blocks=1 00:09:40.269 00:09:40.269 ' 00:09:40.269 21:42:03 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:40.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.269 --rc genhtml_branch_coverage=1 00:09:40.269 --rc genhtml_function_coverage=1 00:09:40.269 --rc genhtml_legend=1 00:09:40.269 --rc geninfo_all_blocks=1 00:09:40.269 --rc geninfo_unexecuted_blocks=1 00:09:40.269 00:09:40.269 ' 00:09:40.269 21:42:03 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:40.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.269 --rc genhtml_branch_coverage=1 00:09:40.269 --rc genhtml_function_coverage=1 00:09:40.269 --rc genhtml_legend=1 00:09:40.269 --rc geninfo_all_blocks=1 00:09:40.269 --rc geninfo_unexecuted_blocks=1 00:09:40.269 00:09:40.269 ' 00:09:40.269 21:42:03 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:40.269 21:42:03 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:40.269 21:42:03 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:40.269 21:42:03 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:40.269 21:42:03 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:40.269 21:42:03 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:40.269 21:42:03 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.269 21:42:03 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.269 21:42:03 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.269 21:42:03 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.269 21:42:03 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.269 21:42:03 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.269 21:42:03 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:40.269 21:42:03 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:40.269 21:42:03 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:40.269 21:42:03 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:40.269 21:42:03 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:40.269 21:42:03 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:40.269 21:42:03 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:40.269 21:42:03 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:40.269 21:42:03 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:40.269 21:42:03 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:40.269 21:42:03 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:40.269 21:42:03 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:40.269 21:42:03 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:40.269 21:42:03 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:40.269 21:42:03 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:40.269 21:42:03 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:40.531 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:40.793 Waiting for block devices as requested 00:09:40.793 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:41.056 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:41.056 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:41.056 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:46.358 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:46.358 21:42:09 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:46.358 21:42:09 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:46.358 21:42:09 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:46.358 21:42:09 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:46.358 21:42:09 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.358 21:42:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:46.358 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:46.359 21:42:09 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.359 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 21:42:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 21:42:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:46.360 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:46.361 21:42:09 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:46.361 
21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.361 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:46.362 21:42:09 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 21:42:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:46.362 21:42:09 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:46.362 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 21:42:09 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:46.363 21:42:09 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:46.363 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:46.364 21:42:09 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:46.364 21:42:09 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:46.364 21:42:09 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:46.364 21:42:09 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:46.364 21:42:09 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.364 
21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.364 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:46.365 
21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 21:42:09 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.365 21:42:09 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.365 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.366 21:42:09 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.366 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:46.367 21:42:09 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
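(Editor's note: the xtrace above is the nvme_get helper in test/nvme/functions.sh folding `nvme id-ctrl` / `nvme id-ns` output into a global associative array named after the device, via the IFS=: / read / eval sequence visible at functions.sh@16-23. The following is a minimal, simplified sketch of that pattern, not the SPDK helper itself; the function name parse_id_output and the whitespace handling are illustrative, and nvme-cli is assumed to be in PATH rather than at the /usr/local/src/nvme-cli path used by the job.)

#!/usr/bin/env bash
# Sketch of the parsing loop the trace exercises: run an nvme-cli "id" command
# and store each "field : value" line in a global associative array.
parse_id_output() {
  local ref=$1 cmd=$2 dev=$3 reg val
  local -gA "${ref}=()"                      # same global-array declaration seen in the log
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}                 # "lbaf  0 " -> "lbaf0", matching the keys in the log
    val=${val# }                             # drop the single space after ':'
    [[ -n $reg && -n $val ]] || continue     # skip blank / header lines
    eval "${ref}[\$reg]=\"\$val\""           # eval-assignment, as in functions.sh@23
  done < <(nvme "$cmd" "$dev")
}
# Illustrative usage:
#   parse_id_output nvme1 id-ctrl /dev/nvme1
#   echo "${nvme1[oncs]}"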
00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.367 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:46.368 21:42:09 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:46.368 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 
21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:46.369 
21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 21:42:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.369 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 21:42:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:46.370 21:42:09 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:46.370 21:42:09 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:46.370 21:42:09 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:46.370 21:42:09 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:46.370 21:42:09 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:46.370 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
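(Editor's note: the functions.sh@47-63 lines in the trace above show the outer discovery loop that walks /sys/class/nvme, records each controller's PCI address, and registers its namespaces before moving on to the next controller (here nvme2 at 0000:00:12.0). Below is a simplified sketch of that loop under stated assumptions: reading the BDF from the controller's sysfs `address` attribute is one plausible way to obtain it and may differ from what functions.sh actually does, and nvmes[] here stores a flat namespace list rather than the per-controller array name used by the real script.)

#!/usr/bin/env bash
shopt -s extglob                             # needed for the @(...) namespace glob below

declare -A ctrls=() nvmes=() bdfs=()
declare -a ordered_ctrls=()

for ctrl in /sys/class/nvme/nvme*; do
  [[ -e $ctrl ]] || continue
  ctrl_dev=${ctrl##*/}                       # e.g. nvme1, nvme2
  bdf=$(<"$ctrl/address")                    # assumption: PCI BDF such as 0000:00:10.0

  # Namespaces appear either as ngXnY (generic char dev) or nvmeXnY block devices.
  ns_list=()
  for ns in "$ctrl/"@("ng${ctrl_dev#nvme}"|"${ctrl_dev}n")*; do
    [[ -e $ns ]] && ns_list+=("${ns##*/}")
  done

  ctrls[$ctrl_dev]=$ctrl_dev
  nvmes[$ctrl_dev]=${ns_list[*]}
  bdfs[$ctrl_dev]=$bdf
  ordered_ctrls[${ctrl_dev#nvme}]=$ctrl_dev  # keep controllers ordered by index, as in the log
done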
00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:46.371 21:42:09 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.371 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:46.372 21:42:09 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.372 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:46.373 
21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:46.373 
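(Note, not part of the captured log: the trace above is nvme/functions.sh caching every identify-controller field of nvme2, e.g. wctemp=343, cctemp=373, sqes=0x66, cqes=0x44, oncs=0x15d, subnqn=nqn.2019-08.org.qemu:12342, before the for-loop at functions.sh@54 walks the ng2n* namespace nodes under /sys/class/nvme/nvme2. Below is a minimal sketch of that parsing pattern as it appears in the trace; the helper name nvme_get_sketch and the usage lines are illustrative assumptions, only the colon-separated "field : value" layout of nvme-cli output, the local -gA declaration, and the eval-into-array step are taken from the log itself.)

# Sketch only, not the SPDK implementation: cache "field : value" pairs
# printed by nvme-cli into a global associative array, the way the traced
# nvme_get calls above populate nvme2[...] and ng2n1[...].
nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                   # same declaration the trace shows at functions.sh@20
        while IFS=: read -r reg val; do
                reg=${reg//[[:space:]]/}      # field names carry only padding spaces
                val=${val# }                  # drop the space after the colon
                [[ -n $val ]] || continue     # skip headers/blank lines, as the [[ -n ... ]] checks do
                eval "${ref}[${reg}]=\"\$val\""
        done < <("$@")
}

# Hypothetical usage mirroring this part of the log:
#   nvme_get_sketch nvme2 nvme id-ctrl /dev/nvme2
#   nvme_get_sketch ng2n1 nvme id-ns  /dev/ng2n1
#   echo "${nvme2[subnqn]} oncs=${nvme2[oncs]} nsze=${ng2n1[nsze]}"

(Each namespace parsed this way is then registered back into the controller's map through the _ctrl_ns nameref, which is what the _ctrl_ns[${ns##*n}]=ng2nX assignments at functions.sh@58 in the following trace records do. In this stretch of the log only the ng2n1..ng2n3 character nodes of the QEMU controller appear.)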
21:42:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.373 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 21:42:09 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 21:42:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:46.374 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:46.375 21:42:09 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 
21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.375 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 21:42:09 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:46.376 21:42:09 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.376 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.644 21:42:09 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.644 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.645 21:42:09 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:46.645 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.646 21:42:09 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:46.646 21:42:09 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.646 21:42:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.646 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:46.647 21:42:09 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.647 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.648 21:42:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:46.648 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:46.649 
21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:46.649 21:42:09 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:46.649 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:46.650 21:42:09 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:46.650 21:42:09 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:46.650 21:42:09 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:46.650 21:42:09 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:46.650 21:42:09 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:46.650 21:42:09 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.650 21:42:09 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.650 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:46.651 21:42:09 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:46.651 21:42:09 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 
21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:46.651 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:46.652 21:42:09 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 
21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:46.652 
21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.652 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.653 21:42:09 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:46.653 21:42:09 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:46.653 21:42:09 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:46.653 21:42:09 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:46.654 21:42:09 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:46.654 21:42:09 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:46.654 21:42:09 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:46.654 21:42:09 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:47.224 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:47.796 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:47.796 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:47.797 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:47.797 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:47.797 21:42:10 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:47.797 21:42:10 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:47.797 21:42:10 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.797 21:42:10 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:47.797 ************************************ 00:09:47.797 START TEST nvme_simple_copy 00:09:47.797 ************************************ 00:09:47.797 21:42:10 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:48.059 Initializing NVMe Controllers 00:09:48.059 Attaching to 0000:00:10.0 00:09:48.059 Controller supports SCC. Attached to 0000:00:10.0 00:09:48.059 Namespace ID: 1 size: 6GB 00:09:48.059 Initialization complete. 
00:09:48.059 00:09:48.059 Controller QEMU NVMe Ctrl (12340 ) 00:09:48.059 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:48.059 Namespace Block Size:4096 00:09:48.059 Writing LBAs 0 to 63 with Random Data 00:09:48.059 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:48.059 LBAs matching Written Data: 64 00:09:48.059 00:09:48.059 real 0m0.259s 00:09:48.059 user 0m0.098s 00:09:48.059 sys 0m0.057s 00:09:48.059 21:42:11 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.059 ************************************ 00:09:48.059 END TEST nvme_simple_copy 00:09:48.059 ************************************ 00:09:48.059 21:42:11 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:48.059 ************************************ 00:09:48.059 END TEST nvme_scc 00:09:48.059 ************************************ 00:09:48.059 00:09:48.059 real 0m7.913s 00:09:48.059 user 0m1.117s 00:09:48.059 sys 0m1.532s 00:09:48.059 21:42:11 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.059 21:42:11 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:48.059 21:42:11 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:48.059 21:42:11 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:48.059 21:42:11 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:48.059 21:42:11 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:48.059 21:42:11 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:48.059 21:42:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:48.059 21:42:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.059 21:42:11 -- common/autotest_common.sh@10 -- # set +x 00:09:48.059 ************************************ 00:09:48.059 START TEST nvme_fdp 00:09:48.059 ************************************ 00:09:48.059 21:42:11 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:48.322 * Looking for test storage... 00:09:48.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:48.322 21:42:11 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:48.322 21:42:11 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:48.322 21:42:11 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:48.322 21:42:11 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:48.322 21:42:11 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:48.322 21:42:11 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:48.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.322 --rc genhtml_branch_coverage=1 00:09:48.322 --rc genhtml_function_coverage=1 00:09:48.322 --rc genhtml_legend=1 00:09:48.322 --rc geninfo_all_blocks=1 00:09:48.322 --rc geninfo_unexecuted_blocks=1 00:09:48.322 00:09:48.322 ' 00:09:48.322 21:42:11 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:48.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.322 --rc genhtml_branch_coverage=1 00:09:48.322 --rc genhtml_function_coverage=1 00:09:48.322 --rc genhtml_legend=1 00:09:48.322 --rc geninfo_all_blocks=1 00:09:48.322 --rc geninfo_unexecuted_blocks=1 00:09:48.322 00:09:48.322 ' 00:09:48.322 21:42:11 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:48.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.322 --rc genhtml_branch_coverage=1 00:09:48.322 --rc genhtml_function_coverage=1 00:09:48.322 --rc genhtml_legend=1 00:09:48.322 --rc geninfo_all_blocks=1 00:09:48.322 --rc geninfo_unexecuted_blocks=1 00:09:48.322 00:09:48.322 ' 00:09:48.322 21:42:11 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:48.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.322 --rc genhtml_branch_coverage=1 00:09:48.322 --rc genhtml_function_coverage=1 00:09:48.322 --rc genhtml_legend=1 00:09:48.322 --rc geninfo_all_blocks=1 00:09:48.322 --rc geninfo_unexecuted_blocks=1 00:09:48.322 00:09:48.322 ' 00:09:48.322 21:42:11 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:48.322 21:42:11 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:48.322 21:42:11 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:48.322 21:42:11 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:48.322 21:42:11 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:48.322 21:42:11 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:48.322 21:42:11 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.322 21:42:11 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.322 21:42:11 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.322 21:42:11 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:48.322 21:42:11 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.322 21:42:11 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:48.322 21:42:11 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:48.323 21:42:11 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:48.323 21:42:11 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:48.323 21:42:11 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:48.323 21:42:11 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:48.323 21:42:11 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:48.323 21:42:11 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:48.323 21:42:11 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:48.323 21:42:11 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:48.323 21:42:11 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:48.584 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:48.846 Waiting for block devices as requested 00:09:48.846 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:48.846 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:49.107 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:49.107 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:54.437 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:54.437 21:42:17 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:54.437 21:42:17 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:54.437 21:42:17 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:54.437 21:42:17 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:54.437 21:42:17 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:54.437 21:42:17 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:54.437 21:42:17 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:54.437 21:42:17 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:54.437 21:42:17 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:54.437 21:42:17 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:54.437 21:42:17 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:54.437 21:42:17 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:54.437 21:42:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:54.437 21:42:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:54.437 21:42:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:54.437 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.438 21:42:17 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:54.438 21:42:17 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.438 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:54.439 21:42:17 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.439 21:42:17 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:54.439 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:54.440 21:42:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.440 
21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:54.440 21:42:17 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:54.440 21:42:17 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.440 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:54.441 21:42:17 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.441 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:54.442 21:42:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
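The xtrace entries above show nvme/functions.sh populating the ng0n1 associative array from `nvme id-ns /dev/ng0n1` output. A minimal sketch of that parsing pattern follows; it is illustrative only, not the SPDK script itself. The helper name nvme_get_sketch and the whitespace trimming are assumptions, while the `local -gA` declaration, the `IFS=:` split, the `read -r reg val` loop, and the `eval` assignment mirror the steps logged here.

    # Sketch only: the pattern traced above, assuming simplified field trimming.
    nvme_get_sketch() {
        local ref=$1 cmd=$2 dev=$3 reg val      # e.g. nvme_get_sketch ng0n1 id-ns /dev/ng0n1
        local -gA "$ref=()"                     # one global associative array per device
        while IFS=: read -r reg val; do         # split "field : value" lines from nvme-cli
            reg=${reg//[[:space:]]/}            # drop padding around the field name
            [[ -n $val ]] || continue           # skip lines that carry no value
            eval "${ref}[${reg}]=\"${val# }\""  # e.g. ng0n1[nsze]="0x140000"
        done < <(nvme "$cmd" "$dev")            # parse the id-ctrl / id-ns text output
    }

The same loop is then run against /dev/nvme0n1 in the entries that follow, which is why the nvme0n1 array is filled with the identical nsze/ncap/nuse and lbaf values.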
00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.442 21:42:17 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.442 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:54.443 21:42:17 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:54.443 21:42:17 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:54.443 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:54.444 21:42:17 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:54.444 21:42:17 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:54.444 21:42:17 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:54.444 21:42:17 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:54.444 21:42:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.444 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.445 21:42:17 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.445 21:42:17 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:09:54.445 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.446 21:42:17 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.446 21:42:17 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:54.446 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:54.447 21:42:17 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.447 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:54.448 21:42:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:54.448 21:42:17 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.448 21:42:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:54.449 21:42:17 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:54.449 21:42:17 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.449 21:42:17 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:54.449 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:54.450 21:42:17 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
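The xtrace records above show nvme/functions.sh's nvme_get helper at work: it takes the target name (ng1n1, nvme1n1, ...), declares a global associative array of that name, runs the nvme-cli binary, and evals every non-empty reg:val pair of the output into the array. A minimal sketch of that loop, reconstructed from the traced lines at functions.sh@16-23; the key/value whitespace handling and any filtering of the nvme-cli output are assumptions, not confirmed by this log:

    # Sketch of the nvme_get pattern seen in the trace (functions.sh@16-23).
    # Called as: nvme_get nvme1n1 id-ns /dev/nvme1n1
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                           # e.g. declare -gA nvme1n1=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}                  # assumption: key padding dropped ("lbaf  0" -> lbaf0)
            # assumption: the real helper also squeezes extra whitespace inside the value
            [[ -n $val ]] && eval "${ref}[${reg}]=\"${val# }\""   # matches eval 'nvme1n1[nsze]="0x17a17a"'
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # functions.sh@16, e.g. nvme id-ns /dev/nvme1n1
    }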
00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:54.450 21:42:17 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:54.450 21:42:17 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:54.450 21:42:17 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:54.450 21:42:17 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.450 21:42:17 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:54.450 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:54.451 21:42:17 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.451 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:54.452 21:42:17 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.452 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.453 21:42:17 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
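At this point the trace is inside the controller scan loop of nvme/functions.sh (@47-63): every /sys/class/nvme/nvmeX that pci_can_use accepts gets an id-ctrl pass, each of its ngXnY/nvmeXnY namespaces gets an id-ns pass, and the results are recorded in the _ctrl_ns, ctrls, nvmes, bdfs and ordered_ctrls maps, exactly as the nvme1 records above and the nvme2/ng2n1 records below show. A rough reconstruction of that loop under stated assumptions; the enclosing function name and the way the PCI address is obtained are not shown in this log:

    # Rough shape of the scan loop traced at functions.sh@47-63. Assumes extglob
    # for the @("ngX"|"nvmeXn")* pattern, nvme_get/pci_can_use as traced, and the
    # ctrls/nvmes/bdfs/ordered_ctrls arrays declared globally elsewhere.
    shopt -s extglob
    scan_nvme_ctrls_sketch() {                              # hypothetical name
        local ctrl ctrl_dev ns ns_dev pci
        for ctrl in /sys/class/nvme/nvme*; do               # functions.sh@47
            [[ -e $ctrl ]] || continue
            pci=$(basename "$(readlink -f "$ctrl/device")") # assumption: BDF read from the device symlink
            pci_can_use "$pci" || continue                  # scripts/common.sh check seen in the trace
            ctrl_dev=${ctrl##*/}
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"   # functions.sh@52
            local -n _ctrl_ns=${ctrl_dev}_ns                # functions.sh@53
            for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # functions.sh@54
                [[ -e $ns ]] || continue
                ns_dev=${ns##*/}
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"     # functions.sh@57
                _ctrl_ns[${ns##*n}]=$ns_dev                 # e.g. _ctrl_ns[1]=nvme2n1
            done
            ctrls["$ctrl_dev"]=$ctrl_dev                    # functions.sh@60
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns               # functions.sh@61
            bdfs["$ctrl_dev"]=$pci                          # e.g. bdfs[nvme2]=0000:00:12.0
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev      # functions.sh@63
        done
    }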
00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:54.453 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.454 
21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.454 21:42:17 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:54.454 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:54.455 21:42:17 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.455 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.456 
21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.456 21:42:17 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:54.456 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:54.457 
21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
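(Aside for readers skimming this stretch of the log: the pattern that keeps repeating above for nvme2, ng2n1 and ng2n2, and continues below for ng2n3, is nvme/functions.sh exercising its nvme_get helper: run nvme-cli's id-ctrl/id-ns against a device node, split every "reg : val" line of the output on the first colon, and eval the value into a global associative array named after the device. The snippet below is only a hedged reconstruction of that helper inferred from the traced commands, not the verbatim functions.sh source; the read-loop plumbing and the exact whitespace trimming are assumptions.)

    # Hedged sketch of the nvme_get pattern seen in the xtrace above.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                   # e.g. 'local -gA ng2n3=()' in the trace
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue         # header lines have no value field
            reg=${reg//[[:space:]]/}          # "nsze   " -> "nsze", "ps    0" -> "ps0"
            val=${val# }                      # drop the single space after the colon
            eval "${ref}[${reg}]=\"${val}\""  # mirrors the logged eval 'ng2n3[nsze]="0x100000"'
        done < <(nvme "$@")                   # the CI run invokes /usr/local/src/nvme-cli/nvme here
    }

    # Usage mirroring the trace: nvme_get ng2n3 id-ns /dev/ng2n3
    # populates ng2n3[nsze], ng2n3[flbas], ng2n3[lbaf0] .. ng2n3[lbaf7], etc.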
00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:54.457 21:42:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:54.457 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:54.458 21:42:17 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:54.458 21:42:17 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:54.458 
21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:54.458 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:54.458 21:42:17 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:54.459 
21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
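The block above walks the id-ns output for nvme2n1 field by field: nvme_get runs the bundled nvme-cli binary, splits every "field : value" line on the colon, and evals each pair into a global associative array named after the device. The helper below is only a minimal reconstruction of that loop for illustration; the name nvme_get_sketch, the whitespace trimming, and the exact quoting are assumptions, not the actual nvme/functions.sh source.

nvme_get_sketch() {
    # usage: nvme_get_sketch nvme2n1 id-ns /dev/nvme2n1  (sketch, not the real helper)
    local ref=$1 reg val
    shift
    local -gA "${ref}=()"                        # global associative array, as seen in the trace
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                 # strip padding around the field name (assumed)
        [[ -n $val ]] && eval "${ref}[${reg}]=\"${val# }\""   # e.g. nvme2n1[nsze]="0x100000"
    done < <(/usr/local/src/nvme-cli/nvme "$@")  # id-ns (or id-ctrl) output being parsed
}

After the call, a field such as ${nvme2n1[nsze]} or ${nvme2n1[lbaf4]} can be read back directly from the array.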
00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.459 21:42:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:54.460 21:42:17 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:54.460 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:54.461 21:42:17 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.461 21:42:17 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:54.461 21:42:17 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:54.461 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.462 21:42:17 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:54.462 21:42:17 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.462 21:42:17 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:54.462 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:54.463 21:42:17 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:54.463 21:42:17 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:54.463 21:42:17 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:54.463 21:42:17 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:54.463 21:42:17 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.463 21:42:17 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
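The preceding lines finished nvme2 (its namespaces nvme2n1 through nvme2n3 stored via _ctrl_ns and the controller registered in the ctrls/nvmes/bdfs/ordered_ctrls maps), and the loop over /sys/class/nvme/nvme* has moved on to nvme3 at 0000:00:13.0, whose id-ctrl output is now being parsed. A rough sketch of that outer discovery loop, reusing the nvme_get_sketch helper above, follows; the BDF resolution, the simplified namespace glob, and the array layout are assumptions rather than the real script.

declare -A ctrls nvmes bdfs
declare -a ordered_ctrls _ctrl_ns
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    ctrl_dev=${ctrl##*/}                                  # e.g. nvme3
    pci=$(basename "$(readlink -f "$ctrl/device")")       # BDF, e.g. 0000:00:13.0 (resolution assumed)
    pci_can_use "$pci" || continue                        # allow-list check, as in scripts/common.sh
    nvme_get_sketch "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"  # cache the controller's id-ctrl fields
    for ns in "$ctrl/${ctrl_dev}n"*; do                   # namespace glob simplified vs. the trace
        [[ -e $ns ]] || continue
        nvme_get_sketch "${ns##*/}" id-ns "/dev/${ns##*/}"
        _ctrl_ns[${ns##*n}]=${ns##*/}                     # e.g. _ctrl_ns[2]=nvme2n2
    done
    ctrls["$ctrl_dev"]=$ctrl_dev
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns
    bdfs["$ctrl_dev"]=$pci
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
done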
00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.463 21:42:17 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.463 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.464 
21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.464 21:42:17 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:54.464 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:54.465 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
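The long run of eval statements above is nvme/functions.sh walking an identify-controller dump one "register: value" pair at a time and storing each pair in the nvme3 associative array. A minimal sketch of that parsing pattern in bash, using an invented two-line sample instead of the real controller output (the whitespace trimming here is an approximation of what the helper does):

    declare -A nvme3
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                 # strip the padding around the register name
        val=${val# }                             # drop the space after the colon
        [[ -n $val ]] && eval "nvme3[$reg]=\"$val\""
    done < <(printf '%s\n' 'sqes      : 0x66' 'cqes      : 0x44')   # invented sample input
    echo "${nvme3[sqes]} ${nvme3[cqes]}"         # prints: 0x66 0x44

Using eval lets one loop populate arbitrary keys, which is why every register from lpa down to the power-state strings ends up addressable as ${nvme3[<reg>]} later in the run.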
00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:54.466 21:42:17 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:54.466 21:42:17 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:54.466 21:42:17 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:54.466 21:42:17 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:54.466 21:42:17 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:55.038 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:55.610 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:55.610 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:55.610 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:55.610 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:55.610 21:42:18 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:55.610 21:42:18 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:55.610 21:42:18 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.610 21:42:18 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:55.610 ************************************ 00:09:55.610 START TEST nvme_flexible_data_placement 00:09:55.610 ************************************ 00:09:55.610 21:42:18 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:55.872 Initializing NVMe Controllers 00:09:55.872 Attaching to 0000:00:13.0 00:09:55.872 Controller supports FDP Attached to 0000:00:13.0 00:09:55.872 Namespace ID: 1 Endurance Group ID: 1 00:09:55.872 Initialization complete. 
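The controller selection that picked nvme3 above comes down to a single bit test: ctrl_has_fdp reads each controller's parsed CTRATT value and keeps the controller only if bit 19, the Flexible Data Placement attribute, is set, which is why nvme3 (ctratt=0x88010) is chosen while the 0x8000 controllers are passed over. A stand-alone sketch of that check, assuming the CTRATT value has already been parsed as shown earlier (the function name here is illustrative, not the one in nvme/functions.sh):

    # Bit 19 of the Identify Controller CTRATT field advertises FDP support.
    ctrl_supports_fdp() {
        local ctratt=$1                 # e.g. 0x88010 for nvme3, 0x8000 for the other controllers
        (( ctratt & (1 << 19) ))        # exit status 0 only when the FDP bit is set
    }
    ctrl_supports_fdp 0x88010 && echo "FDP-capable"     # nvme3: bit 19 set
    ctrl_supports_fdp 0x8000  || echo "no FDP"          # nvme0/nvme1/nvme2: bit 19 clear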
00:09:55.872 00:09:55.872 ================================== 00:09:55.872 == FDP tests for Namespace: #01 == 00:09:55.872 ================================== 00:09:55.872 00:09:55.872 Get Feature: FDP: 00:09:55.872 ================= 00:09:55.872 Enabled: Yes 00:09:55.872 FDP configuration Index: 0 00:09:55.872 00:09:55.872 FDP configurations log page 00:09:55.872 =========================== 00:09:55.872 Number of FDP configurations: 1 00:09:55.872 Version: 0 00:09:55.872 Size: 112 00:09:55.872 FDP Configuration Descriptor: 0 00:09:55.872 Descriptor Size: 96 00:09:55.872 Reclaim Group Identifier format: 2 00:09:55.872 FDP Volatile Write Cache: Not Present 00:09:55.872 FDP Configuration: Valid 00:09:55.872 Vendor Specific Size: 0 00:09:55.872 Number of Reclaim Groups: 2 00:09:55.872 Number of Reclaim Unit Handles: 8 00:09:55.872 Max Placement Identifiers: 128 00:09:55.872 Number of Namespaces Supported: 256 00:09:55.872 Reclaim unit Nominal Size: 6000000 bytes 00:09:55.872 Estimated Reclaim Unit Time Limit: Not Reported 00:09:55.872 RUH Desc #000: RUH Type: Initially Isolated 00:09:55.872 RUH Desc #001: RUH Type: Initially Isolated 00:09:55.872 RUH Desc #002: RUH Type: Initially Isolated 00:09:55.872 RUH Desc #003: RUH Type: Initially Isolated 00:09:55.872 RUH Desc #004: RUH Type: Initially Isolated 00:09:55.872 RUH Desc #005: RUH Type: Initially Isolated 00:09:55.872 RUH Desc #006: RUH Type: Initially Isolated 00:09:55.872 RUH Desc #007: RUH Type: Initially Isolated 00:09:55.872 00:09:55.872 FDP reclaim unit handle usage log page 00:09:55.872 ====================================== 00:09:55.872 Number of Reclaim Unit Handles: 8 00:09:55.872 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:55.872 RUH Usage Desc #001: RUH Attributes: Unused 00:09:55.872 RUH Usage Desc #002: RUH Attributes: Unused 00:09:55.872 RUH Usage Desc #003: RUH Attributes: Unused 00:09:55.872 RUH Usage Desc #004: RUH Attributes: Unused 00:09:55.872 RUH Usage Desc #005: RUH Attributes: Unused 00:09:55.872 RUH Usage Desc #006: RUH Attributes: Unused 00:09:55.872 RUH Usage Desc #007: RUH Attributes: Unused 00:09:55.872 00:09:55.872 FDP statistics log page 00:09:55.872 ======================= 00:09:55.872 Host bytes with metadata written: 2131734528 00:09:55.872 Media bytes with metadata written: 2133012480 00:09:55.872 Media bytes erased: 0 00:09:55.872 00:09:55.872 FDP Reclaim unit handle status 00:09:55.872 ============================== 00:09:55.872 Number of RUHS descriptors: 2 00:09:55.872 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000004f05 00:09:55.872 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:55.872 00:09:55.872 FDP write on placement id: 0 success 00:09:55.872 00:09:55.872 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:55.872 00:09:55.872 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:55.872 00:09:55.872 Get Feature: FDP Events for Placement handle: #0 00:09:55.872 ======================== 00:09:55.872 Number of FDP Events: 6 00:09:55.872 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:55.872 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:55.872 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:55.872 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:55.872 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:55.872 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:55.872 00:09:55.872 FDP events log
page 00:09:55.872 =================== 00:09:55.872 Number of FDP events: 1 00:09:55.872 FDP Event #0: 00:09:55.872 Event Type: RU Not Written to Capacity 00:09:55.872 Placement Identifier: Valid 00:09:55.872 NSID: Valid 00:09:55.872 Location: Valid 00:09:55.872 Placement Identifier: 0 00:09:55.872 Event Timestamp: 6 00:09:55.872 Namespace Identifier: 1 00:09:55.872 Reclaim Group Identifier: 0 00:09:55.872 Reclaim Unit Handle Identifier: 0 00:09:55.872 00:09:55.872 FDP test passed 00:09:55.872 00:09:55.872 real 0m0.219s 00:09:55.872 user 0m0.065s 00:09:55.872 sys 0m0.053s 00:09:55.872 21:42:18 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.872 ************************************ 00:09:55.872 END TEST nvme_flexible_data_placement 00:09:55.872 ************************************ 00:09:55.872 21:42:18 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:56.134 00:09:56.134 real 0m7.853s 00:09:56.134 user 0m1.122s 00:09:56.134 sys 0m1.500s 00:09:56.134 21:42:19 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.134 ************************************ 00:09:56.134 END TEST nvme_fdp 00:09:56.134 ************************************ 00:09:56.134 21:42:19 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:56.134 21:42:19 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:56.134 21:42:19 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:56.134 21:42:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:56.134 21:42:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.134 21:42:19 -- common/autotest_common.sh@10 -- # set +x 00:09:56.134 ************************************ 00:09:56.134 START TEST nvme_rpc 00:09:56.134 ************************************ 00:09:56.134 21:42:19 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:56.134 * Looking for test storage... 
00:09:56.134 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:56.134 21:42:19 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:56.134 21:42:19 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:56.134 21:42:19 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:56.134 21:42:19 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:56.134 21:42:19 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:56.134 21:42:19 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:56.134 21:42:19 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:56.134 21:42:19 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:56.134 21:42:19 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:56.134 21:42:19 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:56.134 21:42:19 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:56.134 21:42:19 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:56.134 21:42:19 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:56.134 21:42:19 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:56.134 21:42:19 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:56.134 21:42:19 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:56.134 21:42:19 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:56.134 21:42:19 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:56.134 21:42:19 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:56.134 21:42:19 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:56.134 21:42:19 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:56.134 21:42:19 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:56.134 21:42:19 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:56.134 21:42:19 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:56.134 21:42:19 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:56.134 21:42:19 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:56.134 21:42:19 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:56.134 21:42:19 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:56.134 21:42:19 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:56.134 21:42:19 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:56.134 21:42:19 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:56.134 21:42:19 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:56.134 21:42:19 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:56.134 21:42:19 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:56.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.134 --rc genhtml_branch_coverage=1 00:09:56.134 --rc genhtml_function_coverage=1 00:09:56.134 --rc genhtml_legend=1 00:09:56.134 --rc geninfo_all_blocks=1 00:09:56.134 --rc geninfo_unexecuted_blocks=1 00:09:56.135 00:09:56.135 ' 00:09:56.135 21:42:19 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:56.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.135 --rc genhtml_branch_coverage=1 00:09:56.135 --rc genhtml_function_coverage=1 00:09:56.135 --rc genhtml_legend=1 00:09:56.135 --rc geninfo_all_blocks=1 00:09:56.135 --rc geninfo_unexecuted_blocks=1 00:09:56.135 00:09:56.135 ' 00:09:56.135 21:42:19 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:56.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.135 --rc genhtml_branch_coverage=1 00:09:56.135 --rc genhtml_function_coverage=1 00:09:56.135 --rc genhtml_legend=1 00:09:56.135 --rc geninfo_all_blocks=1 00:09:56.135 --rc geninfo_unexecuted_blocks=1 00:09:56.135 00:09:56.135 ' 00:09:56.135 21:42:19 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:56.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.135 --rc genhtml_branch_coverage=1 00:09:56.135 --rc genhtml_function_coverage=1 00:09:56.135 --rc genhtml_legend=1 00:09:56.135 --rc geninfo_all_blocks=1 00:09:56.135 --rc geninfo_unexecuted_blocks=1 00:09:56.135 00:09:56.135 ' 00:09:56.135 21:42:19 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:56.135 21:42:19 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:56.135 21:42:19 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:56.135 21:42:19 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:56.135 21:42:19 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:56.135 21:42:19 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:56.135 21:42:19 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:56.135 21:42:19 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:56.135 21:42:19 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:56.135 21:42:19 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:56.135 21:42:19 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:56.395 21:42:19 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:56.395 21:42:19 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:56.395 21:42:19 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:56.395 21:42:19 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:56.395 21:42:19 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=77151 00:09:56.395 21:42:19 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:56.395 21:42:19 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 77151 00:09:56.395 21:42:19 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 77151 ']' 00:09:56.395 21:42:19 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.395 21:42:19 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.395 21:42:19 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.395 21:42:19 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.395 21:42:19 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.395 21:42:19 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:56.395 [2024-11-27 21:42:19.367356] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
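get_first_nvme_bdf, traced above, builds its BDF list by piping scripts/gen_nvme.sh's generated JSON config through jq and taking the first traddr. A condensed sketch of the same idea (the repo path matches this run; the error branch is illustrative):

    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"      # 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 in this run
    bdf=${bdfs[0]}                  # nvme_rpc.sh then attaches Nvme0 to this address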
00:09:56.395 [2024-11-27 21:42:19.367506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77151 ] 00:09:56.657 [2024-11-27 21:42:19.517436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:56.657 [2024-11-27 21:42:19.548018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.657 [2024-11-27 21:42:19.548075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.230 21:42:20 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.230 21:42:20 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:57.230 21:42:20 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:57.491 Nvme0n1 00:09:57.491 21:42:20 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:57.491 21:42:20 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:57.779 request: 00:09:57.779 { 00:09:57.779 "bdev_name": "Nvme0n1", 00:09:57.779 "filename": "non_existing_file", 00:09:57.779 "method": "bdev_nvme_apply_firmware", 00:09:57.779 "req_id": 1 00:09:57.779 } 00:09:57.779 Got JSON-RPC error response 00:09:57.779 response: 00:09:57.779 { 00:09:57.779 "code": -32603, 00:09:57.779 "message": "open file failed." 00:09:57.779 } 00:09:57.779 21:42:20 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:57.779 21:42:20 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:57.779 21:42:20 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:58.058 21:42:20 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:58.058 21:42:20 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 77151 00:09:58.058 21:42:20 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 77151 ']' 00:09:58.058 21:42:20 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 77151 00:09:58.058 21:42:20 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:58.058 21:42:20 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.058 21:42:20 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77151 00:09:58.058 21:42:20 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:58.058 killing process with pid 77151 00:09:58.058 21:42:20 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:58.058 21:42:20 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77151' 00:09:58.058 21:42:20 nvme_rpc -- common/autotest_common.sh@973 -- # kill 77151 00:09:58.058 21:42:20 nvme_rpc -- common/autotest_common.sh@978 -- # wait 77151 00:09:58.320 00:09:58.320 real 0m2.167s 00:09:58.320 user 0m4.175s 00:09:58.320 sys 0m0.550s 00:09:58.320 21:42:21 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.320 21:42:21 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.320 ************************************ 00:09:58.320 END TEST nvme_rpc 00:09:58.320 ************************************ 00:09:58.320 21:42:21 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:58.320 21:42:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:58.320 21:42:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.320 21:42:21 -- common/autotest_common.sh@10 -- # set +x 00:09:58.320 ************************************ 00:09:58.320 START TEST nvme_rpc_timeouts 00:09:58.320 ************************************ 00:09:58.320 21:42:21 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:58.320 * Looking for test storage... 00:09:58.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:58.320 21:42:21 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:58.320 21:42:21 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:09:58.320 21:42:21 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:58.320 21:42:21 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:58.320 21:42:21 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:58.320 21:42:21 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:58.320 21:42:21 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:58.320 21:42:21 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:58.320 21:42:21 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:58.320 21:42:21 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:58.320 21:42:21 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:58.320 21:42:21 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:58.320 21:42:21 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:58.320 21:42:21 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:58.320 21:42:21 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:58.320 21:42:21 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:58.320 21:42:21 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:58.320 21:42:21 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:58.320 21:42:21 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:58.320 21:42:21 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:58.320 21:42:21 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:58.320 21:42:21 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:58.321 21:42:21 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:58.321 21:42:21 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:58.321 21:42:21 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:58.583 21:42:21 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:58.583 21:42:21 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:58.583 21:42:21 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:58.583 21:42:21 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:58.583 21:42:21 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:58.583 21:42:21 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:58.583 21:42:21 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:58.583 21:42:21 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:58.583 21:42:21 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:58.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.583 --rc genhtml_branch_coverage=1 00:09:58.583 --rc genhtml_function_coverage=1 00:09:58.583 --rc genhtml_legend=1 00:09:58.583 --rc geninfo_all_blocks=1 00:09:58.583 --rc geninfo_unexecuted_blocks=1 00:09:58.583 00:09:58.583 ' 00:09:58.583 21:42:21 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:58.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.583 --rc genhtml_branch_coverage=1 00:09:58.583 --rc genhtml_function_coverage=1 00:09:58.583 --rc genhtml_legend=1 00:09:58.583 --rc geninfo_all_blocks=1 00:09:58.583 --rc geninfo_unexecuted_blocks=1 00:09:58.583 00:09:58.583 ' 00:09:58.583 21:42:21 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:58.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.583 --rc genhtml_branch_coverage=1 00:09:58.583 --rc genhtml_function_coverage=1 00:09:58.583 --rc genhtml_legend=1 00:09:58.583 --rc geninfo_all_blocks=1 00:09:58.583 --rc geninfo_unexecuted_blocks=1 00:09:58.583 00:09:58.583 ' 00:09:58.583 21:42:21 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:58.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.583 --rc genhtml_branch_coverage=1 00:09:58.583 --rc genhtml_function_coverage=1 00:09:58.583 --rc genhtml_legend=1 00:09:58.583 --rc geninfo_all_blocks=1 00:09:58.583 --rc geninfo_unexecuted_blocks=1 00:09:58.583 00:09:58.583 ' 00:09:58.583 21:42:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:58.583 21:42:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_77205 00:09:58.583 21:42:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_77205 00:09:58.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:58.583 21:42:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=77237 00:09:58.583 21:42:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:58.583 21:42:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 77237 00:09:58.583 21:42:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:58.583 21:42:21 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 77237 ']' 00:09:58.583 21:42:21 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.583 21:42:21 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:58.583 21:42:21 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.583 21:42:21 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.583 21:42:21 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:58.583 [2024-11-27 21:42:21.525273] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:09:58.583 [2024-11-27 21:42:21.525434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77237 ] 00:09:58.583 [2024-11-27 21:42:21.675116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:58.845 [2024-11-27 21:42:21.705882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.845 [2024-11-27 21:42:21.705951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.417 21:42:22 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.417 21:42:22 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:59.417 Checking default timeout settings: 00:09:59.417 21:42:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:59.418 21:42:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:59.680 Making settings changes with rpc: 00:09:59.680 21:42:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:59.680 21:42:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:59.942 Check default vs. modified settings: 00:09:59.942 21:42:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:59.942 21:42:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_77205 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_77205 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:00.205 Setting action_on_timeout is changed as expected. 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_77205 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_77205 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:00.205 Setting timeout_us is changed as expected. 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
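Each pass through the settings_to_check loop extracts one field from the two save_config dumps and requires that the value captured after bdev_nvme_set_options differs from the default. A condensed sketch of that comparison (file names are the /tmp/settings_*_77205 pair created earlier; the failure message is illustrative):

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default_77205 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_77205 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [ "$before" == "$after" ] && { echo "Setting $setting was not changed" >&2; exit 1; }
        echo "Setting $setting is changed as expected."
    done

With the options applied above (--timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort), all three fields move away from their defaults of none/0/0, so each iteration prints the "changed as expected" line seen in this log.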
00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_77205 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_77205 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:00.205 Setting timeout_admin_us is changed as expected. 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_77205 /tmp/settings_modified_77205 00:10:00.205 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 77237 00:10:00.205 21:42:23 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 77237 ']' 00:10:00.205 21:42:23 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 77237 00:10:00.205 21:42:23 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:10:00.205 21:42:23 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:00.205 21:42:23 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77237 00:10:00.466 21:42:23 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:00.466 21:42:23 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:00.466 killing process with pid 77237 00:10:00.466 21:42:23 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77237' 00:10:00.466 21:42:23 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 77237 00:10:00.466 21:42:23 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 77237 00:10:00.728 RPC TIMEOUT SETTING TEST PASSED. 00:10:00.728 21:42:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
00:10:00.728 00:10:00.728 real 0m2.333s 00:10:00.728 user 0m4.622s 00:10:00.728 sys 0m0.577s 00:10:00.728 21:42:23 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.728 ************************************ 00:10:00.728 END TEST nvme_rpc_timeouts 00:10:00.728 ************************************ 00:10:00.728 21:42:23 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:00.728 21:42:23 -- spdk/autotest.sh@239 -- # uname -s 00:10:00.728 21:42:23 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:00.728 21:42:23 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:00.728 21:42:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:00.728 21:42:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.728 21:42:23 -- common/autotest_common.sh@10 -- # set +x 00:10:00.728 ************************************ 00:10:00.728 START TEST sw_hotplug 00:10:00.728 ************************************ 00:10:00.728 21:42:23 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:00.728 * Looking for test storage... 00:10:00.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:00.728 21:42:23 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:00.728 21:42:23 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:10:00.728 21:42:23 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:00.728 21:42:23 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:00.728 21:42:23 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:00.728 21:42:23 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:00.728 21:42:23 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:00.728 21:42:23 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:00.728 21:42:23 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:00.728 21:42:23 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:00.728 21:42:23 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:00.728 21:42:23 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:00.728 21:42:23 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:00.728 21:42:23 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:00.728 21:42:23 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:00.728 21:42:23 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:00.728 21:42:23 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:00.728 21:42:23 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:00.728 21:42:23 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:00.728 21:42:23 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:00.728 21:42:23 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:00.728 21:42:23 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:00.728 21:42:23 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:00.728 21:42:23 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:00.728 21:42:23 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:00.728 21:42:23 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:00.728 21:42:23 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:00.728 21:42:23 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:00.728 21:42:23 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:00.728 21:42:23 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:00.728 21:42:23 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:00.728 21:42:23 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:00.728 21:42:23 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:00.728 21:42:23 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:00.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.728 --rc genhtml_branch_coverage=1 00:10:00.728 --rc genhtml_function_coverage=1 00:10:00.728 --rc genhtml_legend=1 00:10:00.728 --rc geninfo_all_blocks=1 00:10:00.728 --rc geninfo_unexecuted_blocks=1 00:10:00.728 00:10:00.728 ' 00:10:00.728 21:42:23 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:00.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.728 --rc genhtml_branch_coverage=1 00:10:00.728 --rc genhtml_function_coverage=1 00:10:00.728 --rc genhtml_legend=1 00:10:00.728 --rc geninfo_all_blocks=1 00:10:00.728 --rc geninfo_unexecuted_blocks=1 00:10:00.728 00:10:00.728 ' 00:10:00.728 21:42:23 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:00.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.728 --rc genhtml_branch_coverage=1 00:10:00.728 --rc genhtml_function_coverage=1 00:10:00.728 --rc genhtml_legend=1 00:10:00.728 --rc geninfo_all_blocks=1 00:10:00.728 --rc geninfo_unexecuted_blocks=1 00:10:00.728 00:10:00.728 ' 00:10:00.728 21:42:23 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:00.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.728 --rc genhtml_branch_coverage=1 00:10:00.728 --rc genhtml_function_coverage=1 00:10:00.728 --rc genhtml_legend=1 00:10:00.728 --rc geninfo_all_blocks=1 00:10:00.728 --rc geninfo_unexecuted_blocks=1 00:10:00.728 00:10:00.728 ' 00:10:00.728 21:42:23 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:01.302 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:01.302 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:01.302 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:01.302 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:01.302 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:01.302 21:42:24 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:01.302 21:42:24 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:01.302 21:42:24 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:10:01.302 21:42:24 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:01.302 21:42:24 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:01.302 21:42:24 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:01.302 21:42:24 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:01.302 21:42:24 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:01.302 21:42:24 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:01.302 21:42:24 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:01.302 21:42:24 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:01.302 21:42:24 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:01.302 21:42:24 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:01.302 21:42:24 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:01.302 21:42:24 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:01.303 21:42:24 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:01.303 21:42:24 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:01.303 21:42:24 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:01.303 21:42:24 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:01.303 21:42:24 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:01.565 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:01.827 Waiting for block devices as requested 00:10:01.827 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:01.827 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:02.088 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:02.088 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:07.378 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:07.378 21:42:30 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:07.378 21:42:30 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:07.640 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:07.640 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:07.640 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:07.901 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:08.161 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:08.161 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:08.161 21:42:31 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:08.161 21:42:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:08.421 21:42:31 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:08.421 21:42:31 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:08.421 21:42:31 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=78082 00:10:08.421 21:42:31 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:08.421 21:42:31 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:08.421 21:42:31 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:08.421 21:42:31 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:08.421 21:42:31 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:08.421 21:42:31 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:08.421 21:42:31 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:08.421 21:42:31 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:08.421 21:42:31 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:10:08.421 21:42:31 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:08.421 21:42:31 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:08.421 21:42:31 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:08.421 21:42:31 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:08.421 21:42:31 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:08.421 Initializing NVMe Controllers 00:10:08.421 Attaching to 0000:00:10.0 00:10:08.421 Attaching to 0000:00:11.0 00:10:08.421 Attached to 0000:00:10.0 00:10:08.421 Attached to 0000:00:11.0 00:10:08.421 Initialization complete. Starting I/O... 
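The nvme_in_userspace enumeration traced above finds the test controllers by their PCI class code: NVMe is class 01, subclass 08, prog-if 02, which scripts/common.sh matches in lspci -mm -n -D output before filtering out denied or mounted devices. A rough sysfs-based equivalent of the same class check (an illustrative alternative, not the scripts/common.sh implementation):

    # NVMe controllers report PCI class 0x010802 (class 01 / subclass 08 / prog-if 02)
    for dev in /sys/bus/pci/devices/*; do
        [ "$(cat "$dev/class")" = "0x010802" ] && basename "$dev"
    done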
00:10:08.421 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:08.421 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:08.421 00:10:09.811 QEMU NVMe Ctrl (12340 ): 2504 I/Os completed (+2504) 00:10:09.811 QEMU NVMe Ctrl (12341 ): 2504 I/Os completed (+2504) 00:10:09.811 00:10:10.758 QEMU NVMe Ctrl (12340 ): 5688 I/Os completed (+3184) 00:10:10.758 QEMU NVMe Ctrl (12341 ): 5689 I/Os completed (+3185) 00:10:10.758 00:10:11.754 QEMU NVMe Ctrl (12340 ): 9414 I/Os completed (+3726) 00:10:11.754 QEMU NVMe Ctrl (12341 ): 9406 I/Os completed (+3717) 00:10:11.754 00:10:12.687 QEMU NVMe Ctrl (12340 ): 13733 I/Os completed (+4319) 00:10:12.687 QEMU NVMe Ctrl (12341 ): 13740 I/Os completed (+4334) 00:10:12.687 00:10:13.624 QEMU NVMe Ctrl (12340 ): 18162 I/Os completed (+4429) 00:10:13.624 QEMU NVMe Ctrl (12341 ): 18175 I/Os completed (+4435) 00:10:13.624 00:10:14.567 21:42:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:14.567 21:42:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:14.567 21:42:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:14.567 [2024-11-27 21:42:37.341627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:14.567 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:14.567 [2024-11-27 21:42:37.343178] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.567 [2024-11-27 21:42:37.343350] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.567 [2024-11-27 21:42:37.343409] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.567 [2024-11-27 21:42:37.343535] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.567 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:14.567 [2024-11-27 21:42:37.345311] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.567 [2024-11-27 21:42:37.345464] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.567 [2024-11-27 21:42:37.345494] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.567 [2024-11-27 21:42:37.345514] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.567 21:42:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:14.567 21:42:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:14.567 [2024-11-27 21:42:37.362149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:14.567 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:14.567 [2024-11-27 21:42:37.363238] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.567 [2024-11-27 21:42:37.363324] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.567 [2024-11-27 21:42:37.363362] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.567 [2024-11-27 21:42:37.363382] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.567 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:14.567 [2024-11-27 21:42:37.364622] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.567 [2024-11-27 21:42:37.364669] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.567 [2024-11-27 21:42:37.364695] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.567 [2024-11-27 21:42:37.364713] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:14.567 21:42:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:14.567 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:14.567 EAL: Scan for (pci) bus failed. 00:10:14.567 21:42:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:14.567 21:42:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:14.567 21:42:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:14.567 21:42:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:14.567 00:10:14.567 21:42:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:14.567 21:42:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:14.567 21:42:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:14.567 21:42:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:14.567 21:42:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:14.567 Attaching to 0000:00:10.0 00:10:14.567 Attached to 0000:00:10.0 00:10:14.567 21:42:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:14.567 21:42:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:14.567 21:42:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:14.567 Attaching to 0000:00:11.0 00:10:14.567 Attached to 0000:00:11.0 00:10:15.511 QEMU NVMe Ctrl (12340 ): 2752 I/Os completed (+2752) 00:10:15.511 QEMU NVMe Ctrl (12341 ): 2554 I/Os completed (+2554) 00:10:15.511 00:10:16.455 QEMU NVMe Ctrl (12340 ): 5506 I/Os completed (+2754) 00:10:16.455 QEMU NVMe Ctrl (12341 ): 5370 I/Os completed (+2816) 00:10:16.455 00:10:17.400 QEMU NVMe Ctrl (12340 ): 8059 I/Os completed (+2553) 00:10:17.400 QEMU NVMe Ctrl (12341 ): 8790 I/Os completed (+3420) 00:10:17.400 00:10:18.789 QEMU NVMe Ctrl (12340 ): 11074 I/Os completed (+3015) 00:10:18.789 QEMU NVMe Ctrl (12341 ): 11867 I/Os completed (+3077) 00:10:18.789 00:10:19.732 QEMU NVMe Ctrl (12340 ): 14207 I/Os completed (+3133) 00:10:19.732 QEMU NVMe Ctrl (12341 ): 15030 I/Os completed (+3163) 00:10:19.732 00:10:20.679 QEMU NVMe Ctrl (12340 ): 17291 I/Os completed (+3084) 00:10:20.679 QEMU NVMe Ctrl (12341 ): 18121 I/Os completed (+3091) 00:10:20.679 00:10:21.626 QEMU NVMe Ctrl (12340 ): 20323 I/Os completed (+3032) 00:10:21.626 QEMU NVMe Ctrl (12341 ): 21158 I/Os completed (+3037) 
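One complete hotplug cycle finished just above: both controllers were surprise-removed, the hotplug example app reported their qpairs as aborted and unregistered the devices, and after the sleep the helper rebound them so I/O could resume. The xtrace only records the echoed values at sw_hotplug.sh lines 40 and 58-62, not the redirect targets, so the following is a hedged sketch of the usual sysfs sequence behind such a cycle (the paths are assumptions, not taken from this log):

    bdf=0000:00:10.0
    echo 1 > /sys/bus/pci/devices/$bdf/remove            # surprise-remove the controller
    sleep 6                                              # give the application time to see the failed state
    echo 1 > /sys/bus/pci/rescan                         # bring the device back onto the bus
    echo uio_pci_generic > /sys/bus/pci/devices/$bdf/driver_override
    echo $bdf > /sys/bus/pci/drivers_probe               # rebind it to the userspace driver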
00:10:21.626 00:10:22.569 QEMU NVMe Ctrl (12340 ): 23435 I/Os completed (+3112) 00:10:22.569 QEMU NVMe Ctrl (12341 ): 24270 I/Os completed (+3112) 00:10:22.569 00:10:23.504 QEMU NVMe Ctrl (12340 ): 27330 I/Os completed (+3895) 00:10:23.504 QEMU NVMe Ctrl (12341 ): 28426 I/Os completed (+4156) 00:10:23.504 00:10:24.444 QEMU NVMe Ctrl (12340 ): 31135 I/Os completed (+3805) 00:10:24.444 QEMU NVMe Ctrl (12341 ): 32722 I/Os completed (+4296) 00:10:24.444 00:10:25.830 QEMU NVMe Ctrl (12340 ): 34467 I/Os completed (+3332) 00:10:25.830 QEMU NVMe Ctrl (12341 ): 36064 I/Os completed (+3342) 00:10:25.830 00:10:26.402 QEMU NVMe Ctrl (12340 ): 37491 I/Os completed (+3024) 00:10:26.402 QEMU NVMe Ctrl (12341 ): 39116 I/Os completed (+3052) 00:10:26.402 00:10:26.662 21:42:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:26.663 21:42:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:26.663 21:42:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:26.663 21:42:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:26.663 [2024-11-27 21:42:49.642987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:26.663 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:26.663 [2024-11-27 21:42:49.644232] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.663 [2024-11-27 21:42:49.644290] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.663 [2024-11-27 21:42:49.644307] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.663 [2024-11-27 21:42:49.644327] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.663 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:26.663 [2024-11-27 21:42:49.646254] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.663 [2024-11-27 21:42:49.646324] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.663 [2024-11-27 21:42:49.646358] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.663 [2024-11-27 21:42:49.646375] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.663 21:42:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:26.663 21:42:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:26.663 [2024-11-27 21:42:49.666103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:26.663 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:26.663 [2024-11-27 21:42:49.667173] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.663 [2024-11-27 21:42:49.667218] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.663 [2024-11-27 21:42:49.667237] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.663 [2024-11-27 21:42:49.667250] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.663 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:26.663 [2024-11-27 21:42:49.668532] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.663 [2024-11-27 21:42:49.668571] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.663 [2024-11-27 21:42:49.668593] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.663 [2024-11-27 21:42:49.668608] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.663 21:42:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:26.663 21:42:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:26.663 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:26.663 EAL: Scan for (pci) bus failed. 00:10:26.921 21:42:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:26.921 21:42:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:26.921 21:42:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:26.921 21:42:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:26.921 21:42:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:26.921 21:42:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:26.921 21:42:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:26.921 21:42:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:26.921 Attaching to 0000:00:10.0 00:10:26.921 Attached to 0000:00:10.0 00:10:26.921 21:42:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:26.921 21:42:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:26.921 21:42:49 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:26.921 Attaching to 0000:00:11.0 00:10:26.921 Attached to 0000:00:11.0 00:10:27.493 QEMU NVMe Ctrl (12340 ): 1952 I/Os completed (+1952) 00:10:27.493 QEMU NVMe Ctrl (12341 ): 1818 I/Os completed (+1818) 00:10:27.493 00:10:28.435 QEMU NVMe Ctrl (12340 ): 4826 I/Os completed (+2874) 00:10:28.435 QEMU NVMe Ctrl (12341 ): 4695 I/Os completed (+2877) 00:10:28.435 00:10:29.822 QEMU NVMe Ctrl (12340 ): 8118 I/Os completed (+3292) 00:10:29.822 QEMU NVMe Ctrl (12341 ): 8128 I/Os completed (+3433) 00:10:29.822 00:10:30.766 QEMU NVMe Ctrl (12340 ): 11425 I/Os completed (+3307) 00:10:30.766 QEMU NVMe Ctrl (12341 ): 11576 I/Os completed (+3448) 00:10:30.766 00:10:31.709 QEMU NVMe Ctrl (12340 ): 15892 I/Os completed (+4467) 00:10:31.709 QEMU NVMe Ctrl (12341 ): 16047 I/Os completed (+4471) 00:10:31.709 00:10:32.652 QEMU NVMe Ctrl (12340 ): 19265 I/Os completed (+3373) 00:10:32.652 QEMU NVMe Ctrl (12341 ): 19477 I/Os completed (+3430) 00:10:32.652 00:10:33.596 QEMU NVMe Ctrl (12340 ): 23001 I/Os completed (+3736) 00:10:33.596 QEMU NVMe Ctrl (12341 ): 23209 I/Os completed (+3732) 00:10:33.596 
00:10:34.539 QEMU NVMe Ctrl (12340 ): 27429 I/Os completed (+4428) 00:10:34.539 QEMU NVMe Ctrl (12341 ): 27644 I/Os completed (+4435) 00:10:34.539 00:10:35.483 QEMU NVMe Ctrl (12340 ): 30557 I/Os completed (+3128) 00:10:35.483 QEMU NVMe Ctrl (12341 ): 30818 I/Os completed (+3174) 00:10:35.483 00:10:36.426 QEMU NVMe Ctrl (12340 ): 33748 I/Os completed (+3191) 00:10:36.426 QEMU NVMe Ctrl (12341 ): 34022 I/Os completed (+3204) 00:10:36.426 00:10:37.813 QEMU NVMe Ctrl (12340 ): 37131 I/Os completed (+3383) 00:10:37.813 QEMU NVMe Ctrl (12341 ): 37406 I/Os completed (+3384) 00:10:37.813 00:10:38.758 QEMU NVMe Ctrl (12340 ): 40091 I/Os completed (+2960) 00:10:38.758 QEMU NVMe Ctrl (12341 ): 40413 I/Os completed (+3007) 00:10:38.758 00:10:39.020 21:43:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:39.020 21:43:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:39.020 21:43:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:39.020 21:43:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:39.020 [2024-11-27 21:43:01.970423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:39.020 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:39.020 [2024-11-27 21:43:01.971406] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.020 [2024-11-27 21:43:01.971546] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.020 [2024-11-27 21:43:01.971576] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.020 [2024-11-27 21:43:01.971608] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.020 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:39.020 [2024-11-27 21:43:01.973072] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.020 [2024-11-27 21:43:01.973155] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.020 [2024-11-27 21:43:01.973191] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.020 [2024-11-27 21:43:01.973214] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.020 21:43:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:39.020 21:43:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:39.020 [2024-11-27 21:43:01.989276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:39.020 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:39.020 [2024-11-27 21:43:01.990116] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.020 [2024-11-27 21:43:01.990141] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.020 [2024-11-27 21:43:01.990155] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.020 [2024-11-27 21:43:01.990166] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.020 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:39.020 [2024-11-27 21:43:01.991015] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.020 [2024-11-27 21:43:01.991040] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.020 [2024-11-27 21:43:01.991054] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.020 [2024-11-27 21:43:01.991064] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.020 21:43:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:39.020 21:43:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:39.020 21:43:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:39.020 21:43:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:39.020 21:43:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:39.281 21:43:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:39.281 21:43:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:39.281 21:43:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:39.281 21:43:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:39.281 21:43:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:39.281 Attaching to 0000:00:10.0 00:10:39.281 Attached to 0000:00:10.0 00:10:39.281 21:43:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:39.281 21:43:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:39.281 21:43:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:39.281 Attaching to 0000:00:11.0 00:10:39.281 Attached to 0000:00:11.0 00:10:39.281 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:39.281 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:39.281 [2024-11-27 21:43:02.280189] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:51.517 21:43:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:51.517 21:43:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:51.517 21:43:14 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.94 00:10:51.517 21:43:14 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.94 00:10:51.517 21:43:14 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:51.517 21:43:14 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.94 00:10:51.517 21:43:14 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.94 2 00:10:51.518 remove_attach_helper took 42.94s to complete (handling 2 nvme drive(s)) 21:43:14 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:58.104 21:43:20 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 78082 00:10:58.104 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (78082) - No such process 00:10:58.104 21:43:20 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 78082 00:10:58.104 21:43:20 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:58.104 21:43:20 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:58.104 21:43:20 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:58.104 21:43:20 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=78633 00:10:58.104 21:43:20 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:58.104 21:43:20 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:58.104 21:43:20 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 78633 00:10:58.104 21:43:20 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 78633 ']' 00:10:58.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.105 21:43:20 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.105 21:43:20 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:58.105 21:43:20 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.105 21:43:20 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:58.105 21:43:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:58.105 [2024-11-27 21:43:20.367678] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:10:58.105 [2024-11-27 21:43:20.367917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78633 ] 00:10:58.105 [2024-11-27 21:43:20.513333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.105 [2024-11-27 21:43:20.554734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.105 21:43:21 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.105 21:43:21 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:58.105 21:43:21 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:58.105 21:43:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.105 21:43:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:58.105 21:43:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.105 21:43:21 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:58.105 21:43:21 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:58.105 21:43:21 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:58.365 21:43:21 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:58.365 21:43:21 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:58.365 21:43:21 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:58.365 21:43:21 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:58.365 21:43:21 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:58.365 21:43:21 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:58.365 21:43:21 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:58.365 21:43:21 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:58.365 21:43:21 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:58.365 21:43:21 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:04.942 21:43:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:04.942 21:43:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:04.942 21:43:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:04.942 21:43:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:04.942 21:43:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:04.942 21:43:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:04.942 21:43:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:04.942 21:43:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:04.942 21:43:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:04.942 21:43:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:04.942 21:43:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:04.942 21:43:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.942 21:43:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:04.942 21:43:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.942 21:43:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:04.942 21:43:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:04.942 [2024-11-27 21:43:27.317354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:04.942 [2024-11-27 21:43:27.318552] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.942 [2024-11-27 21:43:27.318584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.942 [2024-11-27 21:43:27.318599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.942 [2024-11-27 21:43:27.318613] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.942 [2024-11-27 21:43:27.318621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.942 [2024-11-27 21:43:27.318629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.942 [2024-11-27 21:43:27.318638] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.942 [2024-11-27 21:43:27.318645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.942 [2024-11-27 21:43:27.318652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.942 [2024-11-27 21:43:27.318659] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.942 [2024-11-27 21:43:27.318667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.942 [2024-11-27 21:43:27.318673] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.942 [2024-11-27 21:43:27.717347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:04.942 [2024-11-27 21:43:27.718378] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.942 [2024-11-27 21:43:27.718409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.942 [2024-11-27 21:43:27.718419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.942 [2024-11-27 21:43:27.718430] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.942 [2024-11-27 21:43:27.718437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.942 [2024-11-27 21:43:27.718445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.942 [2024-11-27 21:43:27.718452] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.942 [2024-11-27 21:43:27.718460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.942 [2024-11-27 21:43:27.718466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.942 [2024-11-27 21:43:27.718476] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.942 [2024-11-27 21:43:27.718483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.942 [2024-11-27 21:43:27.718491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.942 21:43:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:04.942 21:43:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:04.942 21:43:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:04.942 21:43:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:04.942 21:43:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:04.942 21:43:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:04.942 21:43:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.942 21:43:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:04.942 21:43:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.942 21:43:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:04.942 21:43:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:04.942 21:43:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:04.942 21:43:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:04.942 21:43:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:04.942 21:43:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:04.942 21:43:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 
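In this second phase the driver of the test is spdk_tgt rather than the hotplug example app: hotplug monitoring is switched on with rpc_cmd bdev_nvme_set_hotplug -e, and the presence of a controller is judged by listing NVMe bdevs over RPC. The bdev_bdfs helper traced here is essentially the jq filter visible in the log; run by hand it would look roughly like this (assuming SPDK's rpc.py client is what rpc_cmd wraps in this run):

    scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u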
00:11:04.942 21:43:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:04.942 21:43:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:04.942 21:43:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:04.942 21:43:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:05.201 21:43:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:05.201 21:43:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:17.397 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:17.397 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:17.397 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:17.397 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:17.397 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:17.397 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:17.397 21:43:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.397 21:43:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.397 21:43:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.397 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:17.397 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:17.397 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:17.397 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:17.397 [2024-11-27 21:43:40.117523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:17.397 [2024-11-27 21:43:40.119224] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.397 [2024-11-27 21:43:40.119257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.397 [2024-11-27 21:43:40.119269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.397 [2024-11-27 21:43:40.119281] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.397 [2024-11-27 21:43:40.119290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.397 [2024-11-27 21:43:40.119297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.397 [2024-11-27 21:43:40.119305] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.397 [2024-11-27 21:43:40.119311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.398 [2024-11-27 21:43:40.119319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.398 [2024-11-27 21:43:40.119325] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.398 [2024-11-27 21:43:40.119343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.398 [2024-11-27 21:43:40.119350] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.398 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:17.398 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:17.398 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:17.398 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:17.398 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:17.398 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:17.398 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:17.398 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:17.398 21:43:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.398 21:43:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.398 21:43:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.398 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:17.398 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:17.656 [2024-11-27 21:43:40.517534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:17.656 [2024-11-27 21:43:40.518587] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.656 [2024-11-27 21:43:40.518619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.656 [2024-11-27 21:43:40.518629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.656 [2024-11-27 21:43:40.518641] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.656 [2024-11-27 21:43:40.518649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.656 [2024-11-27 21:43:40.518657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.656 [2024-11-27 21:43:40.518663] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.656 [2024-11-27 21:43:40.518671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.656 [2024-11-27 21:43:40.518677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.656 [2024-11-27 21:43:40.518686] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.656 [2024-11-27 21:43:40.518693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.656 [2024-11-27 21:43:40.518701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.656 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:17.656 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:17.656 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:17.656 21:43:40 
sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:17.656 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:17.656 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:17.656 21:43:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.656 21:43:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.656 21:43:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.656 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:17.656 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:17.656 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:17.656 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:17.656 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:17.914 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:17.914 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:17.914 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:17.914 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:17.914 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:17.914 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:17.914 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:17.914 21:43:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:30.113 21:43:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:30.113 21:43:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:30.113 21:43:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:30.113 21:43:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:30.113 21:43:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:30.113 21:43:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.113 21:43:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:30.113 21:43:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:30.113 21:43:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.113 21:43:52 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:30.113 21:43:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:30.113 21:43:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:30.113 21:43:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:30.113 21:43:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:30.113 21:43:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:30.113 21:43:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:30.113 21:43:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:30.113 21:43:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:30.113 21:43:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:30.113 21:43:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:30.113 21:43:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:30.113 21:43:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.113 21:43:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:30.113 [2024-11-27 21:43:53.017724] 
nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:30.113 [2024-11-27 21:43:53.018798] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.113 [2024-11-27 21:43:53.018829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.113 [2024-11-27 21:43:53.018843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.113 [2024-11-27 21:43:53.018854] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.113 [2024-11-27 21:43:53.018863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.113 [2024-11-27 21:43:53.018870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.113 [2024-11-27 21:43:53.018879] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.113 [2024-11-27 21:43:53.018885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.113 [2024-11-27 21:43:53.018893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.113 [2024-11-27 21:43:53.018899] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.113 [2024-11-27 21:43:53.018906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.113 [2024-11-27 21:43:53.018913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.113 21:43:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.113 21:43:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:30.113 21:43:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:30.679 [2024-11-27 21:43:53.517730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
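The repeated (( N > 0 )) checks, half-second sleeps, and "Still waiting for %s to be gone" lines around here are the removal-wait loop: after a surprise removal the helper keeps re-listing the NVMe bdevs until the removed controller's address disappears. A compact sketch of that polling pattern, reusing the bdev_bdfs helper from the trace (illustrative only, not the sw_hotplug.sh loop verbatim):

    while bdfs=$(bdev_bdfs) && [ -n "$bdfs" ]; do
        printf 'Still waiting for %s to be gone\n' $bdfs
        sleep 0.5
    done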
00:11:30.679 [2024-11-27 21:43:53.518725] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.679 [2024-11-27 21:43:53.518756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.679 [2024-11-27 21:43:53.518766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.679 [2024-11-27 21:43:53.518778] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.679 [2024-11-27 21:43:53.518785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.679 [2024-11-27 21:43:53.518794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.679 [2024-11-27 21:43:53.518800] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.679 [2024-11-27 21:43:53.518808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.679 [2024-11-27 21:43:53.518814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.679 [2024-11-27 21:43:53.518823] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.679 [2024-11-27 21:43:53.518829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.679 [2024-11-27 21:43:53.518837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.679 21:43:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:30.679 21:43:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:30.679 21:43:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:30.679 21:43:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:30.679 21:43:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:30.679 21:43:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.679 21:43:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:30.679 21:43:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:30.679 21:43:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.679 21:43:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:30.679 21:43:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:30.679 21:43:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:30.679 21:43:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:30.679 21:43:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:30.679 21:43:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:30.679 21:43:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:30.679 21:43:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:30.679 21:43:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:30.679 21:43:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:30.937 21:43:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:30.937 21:43:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:30.937 21:43:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:43.133 21:44:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:43.133 21:44:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:43.133 21:44:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:43.133 21:44:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:43.133 21:44:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:43.133 21:44:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:43.133 21:44:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.133 21:44:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:43.133 21:44:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.133 21:44:05 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:43.133 21:44:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:43.133 21:44:05 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.65 00:11:43.133 21:44:05 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.65 00:11:43.133 21:44:05 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:43.133 21:44:05 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.65 00:11:43.133 21:44:05 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.65 2 00:11:43.133 remove_attach_helper took 44.65s to complete (handling 2 nvme drive(s)) 21:44:05 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:43.133 21:44:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.133 21:44:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:43.133 21:44:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.133 21:44:05 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:43.133 21:44:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.133 21:44:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:43.133 21:44:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.133 21:44:05 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:43.133 21:44:05 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:43.133 21:44:05 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:43.133 21:44:05 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:43.133 21:44:05 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:43.133 21:44:05 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:43.133 21:44:05 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:43.133 21:44:05 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:43.133 21:44:05 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:43.133 21:44:05 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:43.133 21:44:05 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:43.133 21:44:05 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:43.133 21:44:05 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:49.777 21:44:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:49.777 21:44:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:49.777 21:44:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:49.777 21:44:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:49.777 21:44:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:49.777 21:44:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:49.777 21:44:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:49.777 21:44:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:49.777 21:44:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:49.777 21:44:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:49.777 21:44:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.777 21:44:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:49.777 21:44:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.777 21:44:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.777 21:44:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:49.777 21:44:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:49.777 [2024-11-27 21:44:11.999740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:49.777 [2024-11-27 21:44:12.000526] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.777 [2024-11-27 21:44:12.000636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.777 [2024-11-27 21:44:12.000654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.777 [2024-11-27 21:44:12.000665] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.777 [2024-11-27 21:44:12.000674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.777 [2024-11-27 21:44:12.000681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.777 [2024-11-27 21:44:12.000689] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.777 [2024-11-27 21:44:12.000696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.777 [2024-11-27 21:44:12.000705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.777 [2024-11-27 21:44:12.000712] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.777 [2024-11-27 21:44:12.000719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.777 [2024-11-27 21:44:12.000725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.777 [2024-11-27 21:44:12.399746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
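The `echo 1` per device at sw_hotplug.sh@39-40 is the hot-remove half of each iteration; the nvme_ctrlr_fail and "aborting outstanding command" messages that follow are SPDK reacting to the surprise removal, aborting the queued ASYNC EVENT REQUESTs with ABORTED - BY REQUEST. A sketch of that remove step, with the sysfs path again assumed since only the echoed value shows up in the trace:

    # Hypothetical hot-remove helper: '1' is what the trace echoes per device;
    # the /sys/bus/pci/devices/<bdf>/remove destination is an assumption.
    detach_nvme() {
        for bdf in "$@"; do
            echo 1 > "/sys/bus/pci/devices/$bdf/remove"
        done
    }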
00:11:49.777 [2024-11-27 21:44:12.400592] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.777 [2024-11-27 21:44:12.400623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.777 [2024-11-27 21:44:12.400634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.777 [2024-11-27 21:44:12.400647] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.777 [2024-11-27 21:44:12.400654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.777 [2024-11-27 21:44:12.400664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.777 [2024-11-27 21:44:12.400671] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.777 [2024-11-27 21:44:12.400680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.777 [2024-11-27 21:44:12.400687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.777 [2024-11-27 21:44:12.400696] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.777 [2024-11-27 21:44:12.400703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.777 [2024-11-27 21:44:12.400714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.777 21:44:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:49.777 21:44:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:49.777 21:44:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:49.777 21:44:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:49.777 21:44:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:49.777 21:44:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:49.777 21:44:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.777 21:44:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.777 21:44:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.777 21:44:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:49.777 21:44:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:49.777 21:44:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:49.777 21:44:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:49.777 21:44:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:49.777 21:44:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:49.777 21:44:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:49.777 21:44:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:49.777 21:44:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:49.777 21:44:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:49.777 21:44:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:49.777 21:44:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:49.777 21:44:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:01.978 21:44:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:01.978 21:44:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:01.978 21:44:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:01.978 21:44:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:01.978 21:44:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:01.978 21:44:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:01.978 21:44:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.978 21:44:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:01.978 21:44:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.978 21:44:24 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:01.978 21:44:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:01.978 21:44:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:01.978 21:44:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:01.978 21:44:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:01.978 21:44:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:01.978 21:44:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:01.978 21:44:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:01.978 21:44:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:01.978 21:44:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:01.978 21:44:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:01.978 21:44:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:01.978 21:44:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.978 21:44:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:01.978 21:44:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.978 21:44:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:01.978 21:44:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:01.978 [2024-11-27 21:44:24.899957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
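After the re-attach, sw_hotplug.sh@66-71 sleeps, re-reads the bdev-backed BDFs, and compares them against the expected pair before decrementing hotplug_events and starting the next cycle. Putting the traced pieces together, the outer loop of remove_attach_helper roughly follows the sketch below; the iteration count and wait come from the `remove_attach_helper 3 6 true` invocation earlier in the log, and every helper name other than bdev_bdfs is illustrative:

    remove_attach_helper_sketch() {
        local hotplug_events=$1 hotplug_wait=$2      # 3 and 6 in this run
        local expected="0000:00:10.0 0000:00:11.0"
        while ((hotplug_events--)); do
            detach_nvme $expected                    # sw_hotplug.sh@39-40
            wait_for_detach                          # sw_hotplug.sh@50-51
            reattach_nvme $expected                  # sw_hotplug.sh@56-62
            sleep $((hotplug_wait * 2))              # sw_hotplug.sh@66 sleeps 12 here
            [[ "$(bdev_bdfs | xargs)" == "$expected" ]]   # sw_hotplug.sh@70-71
        done
    }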
00:12:01.978 [2024-11-27 21:44:24.900826] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.978 [2024-11-27 21:44:24.900857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.978 [2024-11-27 21:44:24.900871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.978 [2024-11-27 21:44:24.900883] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.978 [2024-11-27 21:44:24.900892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.978 [2024-11-27 21:44:24.900900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.978 [2024-11-27 21:44:24.900909] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.978 [2024-11-27 21:44:24.900916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.978 [2024-11-27 21:44:24.900925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.978 [2024-11-27 21:44:24.900933] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.978 [2024-11-27 21:44:24.900942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.978 [2024-11-27 21:44:24.900949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.237 [2024-11-27 21:44:25.299955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
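The "remove_attach_helper took 44.65s/44.62s to complete" lines in this section come from timing_cmd in autotest_common.sh, which the trace shows running the helper under bash's `time` keyword with TIMEFORMAT=%2R and handing the elapsed seconds to a printf. A condensed sketch of that pattern; unlike the real helper, the timed command's own output is simply discarded here:

    timing_cmd_sketch() {
        local elapsed TIMEFORMAT=%2R
        # Capture only the %2R real-time figure that `time` prints on stderr.
        elapsed=$({ time "$@" > /dev/null 2>&1; } 2>&1)
        echo "$elapsed"
    }

    helper_time=$(timing_cmd_sketch remove_attach_helper 3 6 true)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2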
00:12:02.237 [2024-11-27 21:44:25.300697] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.237 [2024-11-27 21:44:25.300725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.237 [2024-11-27 21:44:25.300734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.237 [2024-11-27 21:44:25.300745] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.237 [2024-11-27 21:44:25.300752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.237 [2024-11-27 21:44:25.300760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.237 [2024-11-27 21:44:25.300767] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.237 [2024-11-27 21:44:25.300774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.237 [2024-11-27 21:44:25.300780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.237 [2024-11-27 21:44:25.300788] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.237 [2024-11-27 21:44:25.300794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.237 [2024-11-27 21:44:25.300803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.495 21:44:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:02.495 21:44:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:02.495 21:44:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:02.495 21:44:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:02.495 21:44:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:02.495 21:44:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:02.495 21:44:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.495 21:44:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:02.495 21:44:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.495 21:44:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:02.495 21:44:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:02.495 21:44:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:02.495 21:44:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:02.495 21:44:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:02.495 21:44:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:02.495 21:44:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:02.495 21:44:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:02.495 21:44:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:02.495 21:44:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:02.754 21:44:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:02.754 21:44:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:02.754 21:44:25 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:14.955 21:44:37 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:14.955 21:44:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:14.955 21:44:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:14.955 21:44:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:14.955 21:44:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:14.955 21:44:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:14.955 21:44:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.955 21:44:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:14.955 21:44:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.955 21:44:37 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:14.955 21:44:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:14.955 21:44:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:14.955 21:44:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:14.955 21:44:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:14.955 21:44:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:14.955 [2024-11-27 21:44:37.700159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:14.955 [2024-11-27 21:44:37.701188] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.955 [2024-11-27 21:44:37.701215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.955 [2024-11-27 21:44:37.701228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.955 [2024-11-27 21:44:37.701240] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.955 [2024-11-27 21:44:37.701251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.955 [2024-11-27 21:44:37.701258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.955 [2024-11-27 21:44:37.701267] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.955 [2024-11-27 21:44:37.701273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.955 [2024-11-27 21:44:37.701281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.955 [2024-11-27 21:44:37.701288] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.955 [2024-11-27 21:44:37.701295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.955 [2024-11-27 21:44:37.701301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.955 21:44:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:14.955 21:44:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:14.955 21:44:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:14.955 21:44:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:14.955 21:44:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:14.955 21:44:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.955 21:44:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:14.955 21:44:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:14.955 21:44:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.955 21:44:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:14.955 21:44:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:15.228 [2024-11-27 21:44:38.100166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:15.228 [2024-11-27 21:44:38.101119] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:15.228 [2024-11-27 21:44:38.101159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.228 [2024-11-27 21:44:38.101168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.228 [2024-11-27 21:44:38.101179] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:15.228 [2024-11-27 21:44:38.101186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.228 [2024-11-27 21:44:38.101195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.228 [2024-11-27 21:44:38.101201] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:15.228 [2024-11-27 21:44:38.101209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.228 [2024-11-27 21:44:38.101216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.228 [2024-11-27 21:44:38.101224] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:15.228 [2024-11-27 21:44:38.101231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.228 [2024-11-27 21:44:38.101238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.228 21:44:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:15.228 21:44:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:15.228 21:44:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:15.228 21:44:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:15.228 21:44:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:15.228 21:44:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:12:15.228 21:44:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.228 21:44:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:15.228 21:44:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.228 21:44:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:15.228 21:44:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:15.228 21:44:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:15.228 21:44:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:15.228 21:44:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:15.487 21:44:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:15.487 21:44:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:15.487 21:44:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:15.487 21:44:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:15.487 21:44:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:15.487 21:44:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:15.487 21:44:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:15.487 21:44:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:27.692 21:44:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:27.693 21:44:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:27.693 21:44:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:27.693 21:44:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:27.693 21:44:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:27.693 21:44:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:27.693 21:44:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.693 21:44:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:27.693 21:44:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.693 21:44:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:27.693 21:44:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:27.693 21:44:50 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.62 00:12:27.693 21:44:50 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.62 00:12:27.693 21:44:50 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:27.693 21:44:50 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.62 00:12:27.693 21:44:50 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.62 2 00:12:27.693 remove_attach_helper took 44.62s to complete (handling 2 nvme drive(s)) 21:44:50 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:27.693 21:44:50 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 78633 00:12:27.693 21:44:50 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 78633 ']' 00:12:27.693 21:44:50 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 78633 00:12:27.693 21:44:50 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:27.693 21:44:50 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:27.693 21:44:50 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78633 00:12:27.693 21:44:50 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:27.693 21:44:50 
sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:27.693 killing process with pid 78633 00:12:27.693 21:44:50 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78633' 00:12:27.693 21:44:50 sw_hotplug -- common/autotest_common.sh@973 -- # kill 78633 00:12:27.693 21:44:50 sw_hotplug -- common/autotest_common.sh@978 -- # wait 78633 00:12:27.953 21:44:50 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:28.214 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:28.475 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:28.475 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:28.734 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:28.734 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:28.734 00:12:28.734 real 2m28.043s 00:12:28.734 user 1m48.410s 00:12:28.734 sys 0m17.981s 00:12:28.734 21:44:51 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.734 ************************************ 00:12:28.734 END TEST sw_hotplug 00:12:28.734 ************************************ 00:12:28.734 21:44:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:28.734 21:44:51 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:28.734 21:44:51 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:28.734 21:44:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:28.734 21:44:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.734 21:44:51 -- common/autotest_common.sh@10 -- # set +x 00:12:28.734 ************************************ 00:12:28.734 START TEST nvme_xnvme 00:12:28.734 ************************************ 00:12:28.734 21:44:51 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:28.998 * Looking for test storage... 
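Before the sw_hotplug test exits, killprocess (autotest_common.sh@954-978 in the trace) tears down the SPDK target: it checks that the PID is still alive, confirms via ps that it names the expected process (reactor_0 here rather than a sudo wrapper), then kills and waits on it. A minimal sketch of the checks visible above; how the real helper handles the sudo case is not shown in this log:

    killprocess_sketch() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                          # still running?
        [[ $(uname) == Linux ]] &&
            process_name=$(ps --no-headers -o comm= "$pid") # reactor_0 for an SPDK app
        if [[ $process_name != sudo ]]; then
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid" || true
        fi
    }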
00:12:28.998 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:28.998 21:44:51 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:28.998 21:44:51 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:28.998 21:44:51 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:28.998 21:44:51 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:28.998 21:44:51 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:28.998 21:44:51 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:28.998 21:44:51 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:28.998 21:44:51 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:28.998 21:44:51 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:28.998 21:44:51 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:28.998 21:44:51 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:28.998 21:44:51 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:28.998 21:44:51 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:28.998 21:44:51 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:28.998 21:44:51 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:28.998 21:44:51 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:28.998 21:44:51 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:28.998 21:44:51 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:28.998 21:44:51 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:28.998 21:44:51 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:28.998 21:44:51 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:28.998 21:44:51 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:28.998 21:44:51 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:28.998 21:44:51 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:28.998 21:44:51 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:28.998 21:44:51 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:28.998 21:44:51 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:28.998 21:44:51 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:28.998 21:44:51 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:28.998 21:44:51 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:28.998 21:44:51 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:28.998 21:44:51 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:28.998 21:44:51 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:28.998 21:44:51 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:28.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.998 --rc genhtml_branch_coverage=1 00:12:28.998 --rc genhtml_function_coverage=1 00:12:28.998 --rc genhtml_legend=1 00:12:28.998 --rc geninfo_all_blocks=1 00:12:28.998 --rc geninfo_unexecuted_blocks=1 00:12:28.998 00:12:28.998 ' 00:12:28.998 21:44:51 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:28.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.998 --rc genhtml_branch_coverage=1 00:12:28.998 --rc genhtml_function_coverage=1 00:12:28.998 --rc genhtml_legend=1 00:12:28.998 --rc geninfo_all_blocks=1 00:12:28.998 --rc geninfo_unexecuted_blocks=1 00:12:28.998 00:12:28.998 ' 00:12:28.998 21:44:51 
nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:28.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.998 --rc genhtml_branch_coverage=1 00:12:28.998 --rc genhtml_function_coverage=1 00:12:28.998 --rc genhtml_legend=1 00:12:28.998 --rc geninfo_all_blocks=1 00:12:28.998 --rc geninfo_unexecuted_blocks=1 00:12:28.998 00:12:28.998 ' 00:12:28.998 21:44:51 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:28.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.998 --rc genhtml_branch_coverage=1 00:12:28.998 --rc genhtml_function_coverage=1 00:12:28.998 --rc genhtml_legend=1 00:12:28.998 --rc geninfo_all_blocks=1 00:12:28.998 --rc geninfo_unexecuted_blocks=1 00:12:28.999 00:12:28.999 ' 00:12:28.999 21:44:51 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:28.999 21:44:51 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:28.999 21:44:51 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:28.999 21:44:51 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:28.999 21:44:51 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:28.999 21:44:51 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:28.999 21:44:51 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:28.999 21:44:51 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:28.999 21:44:51 nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:28.999 21:44:51 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@20 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:28.999 21:44:51 nvme_xnvme -- 
common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:28.999 21:44:51 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:28.999 21:44:51 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:28.999 21:44:51 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:28.999 21:44:51 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:28.999 21:44:51 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:28.999 21:44:51 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:28.999 21:44:51 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:28.999 21:44:51 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:28.999 21:44:51 nvme_xnvme -- 
common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:28.999 21:44:51 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:28.999 21:44:51 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:28.999 21:44:51 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:28.999 21:44:51 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:28.999 21:44:51 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:28.999 21:44:51 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:28.999 21:44:51 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:28.999 21:44:51 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:28.999 #define SPDK_CONFIG_H 00:12:28.999 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:28.999 #define SPDK_CONFIG_APPS 1 00:12:28.999 #define SPDK_CONFIG_ARCH native 00:12:28.999 #define SPDK_CONFIG_ASAN 1 00:12:28.999 #undef SPDK_CONFIG_AVAHI 00:12:28.999 #undef SPDK_CONFIG_CET 00:12:28.999 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:28.999 #define SPDK_CONFIG_COVERAGE 1 00:12:28.999 #define SPDK_CONFIG_CROSS_PREFIX 00:12:28.999 #undef SPDK_CONFIG_CRYPTO 00:12:28.999 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:28.999 #undef SPDK_CONFIG_CUSTOMOCF 00:12:28.999 #undef SPDK_CONFIG_DAOS 00:12:28.999 #define SPDK_CONFIG_DAOS_DIR 00:12:28.999 #define SPDK_CONFIG_DEBUG 1 00:12:28.999 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:28.999 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:12:28.999 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:12:28.999 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:12:29.000 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:29.000 #undef SPDK_CONFIG_DPDK_UADK 00:12:29.000 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:29.000 #define SPDK_CONFIG_EXAMPLES 1 00:12:29.000 #undef SPDK_CONFIG_FC 00:12:29.000 #define SPDK_CONFIG_FC_PATH 00:12:29.000 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:29.000 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:29.000 #define SPDK_CONFIG_FSDEV 1 00:12:29.000 #undef SPDK_CONFIG_FUSE 00:12:29.000 #undef SPDK_CONFIG_FUZZER 00:12:29.000 #define SPDK_CONFIG_FUZZER_LIB 00:12:29.000 #undef SPDK_CONFIG_GOLANG 00:12:29.000 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:29.000 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:29.000 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:29.000 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:29.000 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:29.000 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:29.000 #undef SPDK_CONFIG_HAVE_LZ4 00:12:29.000 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:29.000 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:29.000 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:29.000 #define SPDK_CONFIG_IDXD 1 00:12:29.000 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:29.000 #undef SPDK_CONFIG_IPSEC_MB 00:12:29.000 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:29.000 #define SPDK_CONFIG_ISAL 1 00:12:29.000 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:29.000 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:29.000 #define SPDK_CONFIG_LIBDIR 00:12:29.000 #undef SPDK_CONFIG_LTO 00:12:29.000 #define SPDK_CONFIG_MAX_LCORES 128 00:12:29.000 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:29.000 #define SPDK_CONFIG_NVME_CUSE 1 00:12:29.000 #undef 
SPDK_CONFIG_OCF 00:12:29.000 #define SPDK_CONFIG_OCF_PATH 00:12:29.000 #define SPDK_CONFIG_OPENSSL_PATH 00:12:29.000 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:29.000 #define SPDK_CONFIG_PGO_DIR 00:12:29.000 #undef SPDK_CONFIG_PGO_USE 00:12:29.000 #define SPDK_CONFIG_PREFIX /usr/local 00:12:29.000 #undef SPDK_CONFIG_RAID5F 00:12:29.000 #undef SPDK_CONFIG_RBD 00:12:29.000 #define SPDK_CONFIG_RDMA 1 00:12:29.000 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:29.000 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:29.000 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:29.000 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:29.000 #define SPDK_CONFIG_SHARED 1 00:12:29.000 #undef SPDK_CONFIG_SMA 00:12:29.000 #define SPDK_CONFIG_TESTS 1 00:12:29.000 #undef SPDK_CONFIG_TSAN 00:12:29.000 #define SPDK_CONFIG_UBLK 1 00:12:29.000 #define SPDK_CONFIG_UBSAN 1 00:12:29.000 #undef SPDK_CONFIG_UNIT_TESTS 00:12:29.000 #undef SPDK_CONFIG_URING 00:12:29.000 #define SPDK_CONFIG_URING_PATH 00:12:29.000 #undef SPDK_CONFIG_URING_ZNS 00:12:29.000 #undef SPDK_CONFIG_USDT 00:12:29.000 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:29.000 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:29.000 #undef SPDK_CONFIG_VFIO_USER 00:12:29.000 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:29.000 #define SPDK_CONFIG_VHOST 1 00:12:29.000 #define SPDK_CONFIG_VIRTIO 1 00:12:29.000 #undef SPDK_CONFIG_VTUNE 00:12:29.000 #define SPDK_CONFIG_VTUNE_DIR 00:12:29.000 #define SPDK_CONFIG_WERROR 1 00:12:29.000 #define SPDK_CONFIG_WPDK_DIR 00:12:29.000 #define SPDK_CONFIG_XNVME 1 00:12:29.000 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:29.000 21:44:51 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:29.000 21:44:51 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:29.000 21:44:51 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:29.000 21:44:51 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.000 21:44:51 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.000 21:44:51 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.000 21:44:51 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.000 21:44:51 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.000 21:44:51 nvme_xnvme -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.000 21:44:51 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:29.000 21:44:51 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.000 21:44:51 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:29.000 21:44:51 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:29.000 21:44:51 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:29.000 21:44:52 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:29.000 21:44:52 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:29.000 21:44:52 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:29.000 21:44:52 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:29.000 21:44:52 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:29.000 21:44:52 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:29.000 21:44:52 nvme_xnvme -- pm/common@68 -- # uname -s 00:12:29.000 21:44:52 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:29.000 21:44:52 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:29.000 21:44:52 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:29.000 21:44:52 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:29.000 21:44:52 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:29.000 21:44:52 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:29.000 21:44:52 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:29.000 21:44:52 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:29.000 21:44:52 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:29.000 21:44:52 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:29.000 21:44:52 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:29.000 21:44:52 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:29.000 21:44:52 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:29.000 21:44:52 nvme_xnvme -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:29.000 21:44:52 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:12:29.000 21:44:52 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:29.000 21:44:52 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:29.000 21:44:52 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:29.000 21:44:52 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:29.000 21:44:52 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:29.000 21:44:52 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:29.000 21:44:52 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:29.000 21:44:52 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:29.000 21:44:52 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:29.000 21:44:52 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:29.000 21:44:52 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:29.000 21:44:52 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:29.000 21:44:52 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:29.000 21:44:52 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:29.000 21:44:52 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:29.000 21:44:52 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:29.000 21:44:52 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:29.000 21:44:52 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:29.001 21:44:52 
nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@126 -- # : /home/vagrant/spdk_repo/dpdk/build 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@140 -- # : v22.11.4 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:29.001 21:44:52 
nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@182 -- # 
DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:29.001 21:44:52 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@200 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:29.002 21:44:52 nvme_xnvme -- 
common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 79971 ]] 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 79971 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.6QixT4 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.6QixT4/tests/xnvme /tmp/spdk.6QixT4 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13379907584 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:29.002 21:44:52 nvme_xnvme -- 
common/autotest_common.sh@376 -- # uses["$mount"]=6202331136 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261964800 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13379907584 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6202331136 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:29.002 21:44:52 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265245696 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265397248 00:12:29.003 21:44:52 nvme_xnvme -- 
common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=98511687680 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=1191092224 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:29.003 * Looking for test storage... 
00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13379907584 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:29.003 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:29.003 21:44:52 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:29.265 21:44:52 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:29.265 21:44:52 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:29.265 21:44:52 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:29.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.265 --rc genhtml_branch_coverage=1 00:12:29.265 --rc genhtml_function_coverage=1 00:12:29.265 --rc genhtml_legend=1 00:12:29.265 --rc geninfo_all_blocks=1 00:12:29.265 --rc geninfo_unexecuted_blocks=1 00:12:29.265 00:12:29.265 ' 00:12:29.265 21:44:52 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:29.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.265 --rc genhtml_branch_coverage=1 00:12:29.265 --rc genhtml_function_coverage=1 00:12:29.265 --rc genhtml_legend=1 00:12:29.265 --rc geninfo_all_blocks=1 
00:12:29.265 --rc geninfo_unexecuted_blocks=1 00:12:29.265 00:12:29.265 ' 00:12:29.265 21:44:52 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:29.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.265 --rc genhtml_branch_coverage=1 00:12:29.265 --rc genhtml_function_coverage=1 00:12:29.265 --rc genhtml_legend=1 00:12:29.265 --rc geninfo_all_blocks=1 00:12:29.265 --rc geninfo_unexecuted_blocks=1 00:12:29.265 00:12:29.265 ' 00:12:29.265 21:44:52 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:29.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.265 --rc genhtml_branch_coverage=1 00:12:29.265 --rc genhtml_function_coverage=1 00:12:29.265 --rc genhtml_legend=1 00:12:29.265 --rc geninfo_all_blocks=1 00:12:29.265 --rc geninfo_unexecuted_blocks=1 00:12:29.265 00:12:29.265 ' 00:12:29.265 21:44:52 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.265 21:44:52 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.265 21:44:52 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.265 21:44:52 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.265 21:44:52 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.265 21:44:52 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:29.265 21:44:52 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.265 21:44:52 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:12:29.265 21:44:52 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:12:29.265 21:44:52 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:12:29.265 21:44:52 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:12:29.265 21:44:52 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:12:29.265 21:44:52 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:12:29.265 21:44:52 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:12:29.265 21:44:52 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:12:29.265 21:44:52 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:12:29.265 21:44:52 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:12:29.265 21:44:52 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:12:29.265 21:44:52 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:12:29.265 21:44:52 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:12:29.265 21:44:52 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:12:29.265 21:44:52 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:12:29.265 21:44:52 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:12:29.265 21:44:52 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:12:29.265 21:44:52 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:12:29.265 21:44:52 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:12:29.265 21:44:52 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:12:29.265 21:44:52 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:12:29.265 21:44:52 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:29.526 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:29.526 Waiting for block devices as requested 00:12:29.788 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:29.788 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:29.788 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:30.048 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:35.337 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:35.337 21:44:57 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:35.337 21:44:58 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:35.337 21:44:58 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:35.598 21:44:58 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:35.598 21:44:58 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:35.598 21:44:58 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:35.598 21:44:58 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:35.598 21:44:58 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:35.598 No valid GPT data, bailing 00:12:35.598 21:44:58 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:35.598 21:44:58 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:12:35.598 21:44:58 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:35.598 21:44:58 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:35.598 21:44:58 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:35.598 21:44:58 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:35.598 21:44:58 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:35.598 21:44:58 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:35.598 21:44:58 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:35.598 21:44:58 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:35.598 21:44:58 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:35.598 21:44:58 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:35.598 21:44:58 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:35.598 21:44:58 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:35.598 21:44:58 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:35.598 21:44:58 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:35.598 21:44:58 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:35.598 21:44:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:35.598 21:44:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:35.598 21:44:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:35.598 ************************************ 00:12:35.598 START TEST xnvme_rpc 00:12:35.598 ************************************ 00:12:35.598 21:44:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:35.598 21:44:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:35.598 21:44:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:35.598 21:44:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:35.598 21:44:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:35.598 21:44:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=80362 00:12:35.598 21:44:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 80362 00:12:35.598 21:44:58 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:35.598 21:44:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 80362 ']' 00:12:35.598 21:44:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.598 21:44:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:35.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.598 21:44:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.598 21:44:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:35.598 21:44:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.598 [2024-11-27 21:44:58.664024] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:12:35.598 [2024-11-27 21:44:58.664169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80362 ] 00:12:35.860 [2024-11-27 21:44:58.810166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.860 [2024-11-27 21:44:58.839977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.433 21:44:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:36.433 21:44:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:36.433 21:44:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:36.433 21:44:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.433 21:44:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.433 xnvme_bdev 00:12:36.433 21:44:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 80362 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 80362 ']' 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 80362 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80362 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:36.694 killing process with pid 80362 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80362' 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 80362 00:12:36.694 21:44:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 80362 00:12:36.955 00:12:36.955 real 0m1.440s 00:12:36.955 user 0m1.506s 00:12:36.955 sys 0m0.410s 00:12:36.955 21:45:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:36.955 21:45:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.955 ************************************ 00:12:36.955 END TEST xnvme_rpc 00:12:36.955 ************************************ 00:12:36.955 21:45:00 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:36.955 21:45:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:36.955 21:45:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:36.955 21:45:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:37.216 ************************************ 00:12:37.216 START TEST xnvme_bdevperf 00:12:37.216 ************************************ 00:12:37.216 21:45:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:37.216 21:45:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:37.216 21:45:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:37.216 21:45:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:37.216 21:45:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:37.216 21:45:00 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:37.216 21:45:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:37.216 21:45:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:37.216 { 00:12:37.216 "subsystems": [ 00:12:37.216 { 00:12:37.216 "subsystem": "bdev", 00:12:37.216 "config": [ 00:12:37.216 { 00:12:37.216 "params": { 00:12:37.216 "io_mechanism": "libaio", 00:12:37.216 "conserve_cpu": false, 00:12:37.216 "filename": "/dev/nvme0n1", 00:12:37.216 "name": "xnvme_bdev" 00:12:37.216 }, 00:12:37.216 "method": "bdev_xnvme_create" 00:12:37.216 }, 00:12:37.216 { 00:12:37.216 "method": "bdev_wait_for_examine" 00:12:37.216 } 00:12:37.216 ] 00:12:37.216 } 00:12:37.216 ] 00:12:37.216 } 00:12:37.216 [2024-11-27 21:45:00.155734] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:12:37.216 [2024-11-27 21:45:00.155854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80425 ] 00:12:37.216 [2024-11-27 21:45:00.302890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.216 [2024-11-27 21:45:00.332134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.475 Running I/O for 5 seconds... 00:12:39.418 29654.00 IOPS, 115.84 MiB/s [2024-11-27T21:45:03.484Z] 28175.00 IOPS, 110.06 MiB/s [2024-11-27T21:45:04.871Z] 28063.00 IOPS, 109.62 MiB/s [2024-11-27T21:45:05.441Z] 27315.50 IOPS, 106.70 MiB/s 00:12:42.320 Latency(us) 00:12:42.320 [2024-11-27T21:45:05.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:42.320 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:42.320 xnvme_bdev : 5.00 26654.44 104.12 0.00 0.00 2396.08 392.27 9225.45 00:12:42.320 [2024-11-27T21:45:05.441Z] =================================================================================================================== 00:12:42.320 [2024-11-27T21:45:05.441Z] Total : 26654.44 104.12 0.00 0.00 2396.08 392.27 9225.45 00:12:42.580 21:45:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:42.580 21:45:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:42.580 21:45:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:42.580 21:45:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:42.580 21:45:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:42.580 { 00:12:42.580 "subsystems": [ 00:12:42.580 { 00:12:42.580 "subsystem": "bdev", 00:12:42.580 "config": [ 00:12:42.580 { 00:12:42.580 "params": { 00:12:42.580 "io_mechanism": "libaio", 00:12:42.580 "conserve_cpu": false, 00:12:42.580 "filename": "/dev/nvme0n1", 00:12:42.580 "name": "xnvme_bdev" 00:12:42.580 }, 00:12:42.580 "method": "bdev_xnvme_create" 00:12:42.580 }, 00:12:42.580 { 00:12:42.580 "method": "bdev_wait_for_examine" 00:12:42.580 } 00:12:42.580 ] 00:12:42.580 } 00:12:42.580 ] 00:12:42.580 } 00:12:42.841 [2024-11-27 21:45:05.700521] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:12:42.841 [2024-11-27 21:45:05.700860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80490 ] 00:12:42.841 [2024-11-27 21:45:05.848414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.841 [2024-11-27 21:45:05.877419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.103 Running I/O for 5 seconds... 00:12:44.990 29393.00 IOPS, 114.82 MiB/s [2024-11-27T21:45:09.054Z] 29152.00 IOPS, 113.88 MiB/s [2024-11-27T21:45:09.992Z] 31301.00 IOPS, 122.27 MiB/s [2024-11-27T21:45:11.377Z] 32087.00 IOPS, 125.34 MiB/s [2024-11-27T21:45:11.377Z] 31983.40 IOPS, 124.94 MiB/s 00:12:48.256 Latency(us) 00:12:48.256 [2024-11-27T21:45:11.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:48.256 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:48.256 xnvme_bdev : 5.01 31948.15 124.80 0.00 0.00 1998.65 460.01 6906.49 00:12:48.256 [2024-11-27T21:45:11.377Z] =================================================================================================================== 00:12:48.256 [2024-11-27T21:45:11.377Z] Total : 31948.15 124.80 0.00 0.00 1998.65 460.01 6906.49 00:12:48.256 ************************************ 00:12:48.256 END TEST xnvme_bdevperf 00:12:48.256 ************************************ 00:12:48.256 00:12:48.256 real 0m11.118s 00:12:48.256 user 0m3.431s 00:12:48.256 sys 0m6.189s 00:12:48.256 21:45:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:48.256 21:45:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:48.256 21:45:11 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:48.256 21:45:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:48.256 21:45:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:48.256 21:45:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:48.256 ************************************ 00:12:48.256 START TEST xnvme_fio_plugin 00:12:48.256 ************************************ 00:12:48.256 21:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:48.256 21:45:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:48.256 21:45:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:48.256 21:45:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:48.256 21:45:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:48.256 21:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:48.256 21:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:48.256 21:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:12:48.256 21:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:48.256 21:45:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:48.256 21:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:48.256 21:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:48.256 21:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:48.256 21:45:11 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:48.256 21:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:48.256 21:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:48.257 21:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:48.257 21:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:48.257 21:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:48.257 21:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:48.257 21:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:48.257 21:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:48.257 21:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:48.257 21:45:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:48.257 { 00:12:48.257 "subsystems": [ 00:12:48.257 { 00:12:48.257 "subsystem": "bdev", 00:12:48.257 "config": [ 00:12:48.257 { 00:12:48.257 "params": { 00:12:48.257 "io_mechanism": "libaio", 00:12:48.257 "conserve_cpu": false, 00:12:48.257 "filename": "/dev/nvme0n1", 00:12:48.257 "name": "xnvme_bdev" 00:12:48.257 }, 00:12:48.257 "method": "bdev_xnvme_create" 00:12:48.257 }, 00:12:48.257 { 00:12:48.257 "method": "bdev_wait_for_examine" 00:12:48.257 } 00:12:48.257 ] 00:12:48.257 } 00:12:48.257 ] 00:12:48.257 } 00:12:48.517 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:48.517 fio-3.35 00:12:48.517 Starting 1 thread 00:12:53.828 00:12:53.828 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=80595: Wed Nov 27 21:45:16 2024 00:12:53.828 read: IOPS=31.4k, BW=123MiB/s (128MB/s)(613MiB/5001msec) 00:12:53.828 slat (usec): min=4, max=1991, avg=23.18, stdev=100.91 00:12:53.828 clat (usec): min=71, max=11808, avg=1411.25, stdev=557.33 00:12:53.828 lat (usec): min=207, max=11812, avg=1434.43, stdev=547.31 00:12:53.828 clat percentiles (usec): 00:12:53.828 | 1.00th=[ 289], 5.00th=[ 545], 10.00th=[ 709], 20.00th=[ 930], 00:12:53.828 | 30.00th=[ 1106], 40.00th=[ 1254], 50.00th=[ 1401], 60.00th=[ 1532], 00:12:53.828 | 70.00th=[ 1680], 80.00th=[ 1844], 90.00th=[ 2089], 95.00th=[ 2343], 00:12:53.828 | 99.00th=[ 2999], 99.50th=[ 3228], 99.90th=[ 3654], 99.95th=[ 3884], 00:12:53.828 | 99.99th=[ 4359] 00:12:53.828 bw ( KiB/s): 
min=116240, max=130648, per=100.00%, avg=125676.44, stdev=4695.66, samples=9 00:12:53.828 iops : min=29060, max=32662, avg=31419.11, stdev=1173.92, samples=9 00:12:53.828 lat (usec) : 100=0.01%, 250=0.59%, 500=3.58%, 750=7.50%, 1000=11.96% 00:12:53.828 lat (msec) : 2=63.29%, 4=13.05%, 10=0.03%, 20=0.01% 00:12:53.828 cpu : usr=41.36%, sys=50.02%, ctx=22, majf=0, minf=1065 00:12:53.828 IO depths : 1=0.4%, 2=1.1%, 4=2.9%, 8=8.1%, 16=22.9%, 32=62.4%, >=64=2.1% 00:12:53.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:53.828 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:53.828 issued rwts: total=156853,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:53.828 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:53.828 00:12:53.828 Run status group 0 (all jobs): 00:12:53.828 READ: bw=123MiB/s (128MB/s), 123MiB/s-123MiB/s (128MB/s-128MB/s), io=613MiB (642MB), run=5001-5001msec 00:12:54.402 ----------------------------------------------------- 00:12:54.402 Suppressions used: 00:12:54.402 count bytes template 00:12:54.402 1 11 /usr/src/fio/parse.c 00:12:54.402 1 8 libtcmalloc_minimal.so 00:12:54.402 1 904 libcrypto.so 00:12:54.402 ----------------------------------------------------- 00:12:54.402 00:12:54.402 21:45:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:54.402 21:45:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:54.402 21:45:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:54.402 21:45:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:54.402 21:45:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:54.402 21:45:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:54.402 21:45:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:54.402 21:45:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:54.402 21:45:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:54.402 21:45:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:54.402 21:45:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:54.402 21:45:17 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:54.402 21:45:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:54.402 21:45:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:54.402 21:45:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:54.402 21:45:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:54.402 21:45:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # 
asan_lib=/usr/lib64/libasan.so.8 00:12:54.402 21:45:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:54.402 21:45:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:54.402 21:45:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:54.402 21:45:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:54.402 { 00:12:54.402 "subsystems": [ 00:12:54.402 { 00:12:54.402 "subsystem": "bdev", 00:12:54.402 "config": [ 00:12:54.402 { 00:12:54.402 "params": { 00:12:54.402 "io_mechanism": "libaio", 00:12:54.402 "conserve_cpu": false, 00:12:54.402 "filename": "/dev/nvme0n1", 00:12:54.402 "name": "xnvme_bdev" 00:12:54.402 }, 00:12:54.402 "method": "bdev_xnvme_create" 00:12:54.402 }, 00:12:54.402 { 00:12:54.402 "method": "bdev_wait_for_examine" 00:12:54.402 } 00:12:54.402 ] 00:12:54.402 } 00:12:54.402 ] 00:12:54.402 } 00:12:54.402 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:54.402 fio-3.35 00:12:54.402 Starting 1 thread 00:13:00.990 00:13:00.990 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=80681: Wed Nov 27 21:45:22 2024 00:13:00.990 write: IOPS=34.8k, BW=136MiB/s (143MB/s)(680MiB/5001msec); 0 zone resets 00:13:00.990 slat (usec): min=4, max=1789, avg=22.12, stdev=87.23 00:13:00.990 clat (usec): min=77, max=7422, avg=1236.65, stdev=546.00 00:13:00.990 lat (usec): min=162, max=7426, avg=1258.78, stdev=539.74 00:13:00.990 clat percentiles (usec): 00:13:00.990 | 1.00th=[ 265], 5.00th=[ 445], 10.00th=[ 586], 20.00th=[ 775], 00:13:00.990 | 30.00th=[ 922], 40.00th=[ 1057], 50.00th=[ 1188], 60.00th=[ 1319], 00:13:00.990 | 70.00th=[ 1483], 80.00th=[ 1663], 90.00th=[ 1926], 95.00th=[ 2180], 00:13:00.990 | 99.00th=[ 2868], 99.50th=[ 3130], 99.90th=[ 3851], 99.95th=[ 4080], 00:13:00.990 | 99.99th=[ 4948] 00:13:00.990 bw ( KiB/s): min=129752, max=149944, per=99.22%, avg=138234.67, stdev=6759.50, samples=9 00:13:00.990 iops : min=32438, max=37486, avg=34558.67, stdev=1689.88, samples=9 00:13:00.990 lat (usec) : 100=0.01%, 250=0.85%, 500=5.73%, 750=12.05%, 1000=17.46% 00:13:00.990 lat (msec) : 2=55.74%, 4=8.11%, 10=0.07% 00:13:00.990 cpu : usr=37.68%, sys=51.92%, ctx=16, majf=0, minf=1066 00:13:00.990 IO depths : 1=0.3%, 2=0.9%, 4=2.6%, 8=7.8%, 16=23.1%, 32=63.2%, >=64=2.1% 00:13:00.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:00.990 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:13:00.990 issued rwts: total=0,174178,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:00.990 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:00.990 00:13:00.990 Run status group 0 (all jobs): 00:13:00.990 WRITE: bw=136MiB/s (143MB/s), 136MiB/s-136MiB/s (143MB/s-143MB/s), io=680MiB (713MB), run=5001-5001msec 00:13:00.990 ----------------------------------------------------- 00:13:00.990 Suppressions used: 00:13:00.990 count bytes template 00:13:00.990 1 11 /usr/src/fio/parse.c 00:13:00.990 1 8 libtcmalloc_minimal.so 00:13:00.990 1 904 libcrypto.so 00:13:00.990 ----------------------------------------------------- 00:13:00.990 00:13:00.990 ************************************ 
00:13:00.990 END TEST xnvme_fio_plugin 00:13:00.990 ************************************ 00:13:00.990 00:13:00.990 real 0m12.074s 00:13:00.990 user 0m5.062s 00:13:00.990 sys 0m5.673s 00:13:00.990 21:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.990 21:45:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:00.990 21:45:23 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:00.990 21:45:23 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:00.990 21:45:23 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:00.991 21:45:23 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:00.991 21:45:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:00.991 21:45:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:00.991 21:45:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:00.991 ************************************ 00:13:00.991 START TEST xnvme_rpc 00:13:00.991 ************************************ 00:13:00.991 21:45:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:00.991 21:45:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:00.991 21:45:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:00.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.991 21:45:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:00.991 21:45:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:00.991 21:45:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=80762 00:13:00.991 21:45:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 80762 00:13:00.991 21:45:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 80762 ']' 00:13:00.991 21:45:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.991 21:45:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:00.991 21:45:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.991 21:45:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:00.991 21:45:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:00.991 21:45:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.991 [2024-11-27 21:45:23.492608] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
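The xnvme_rpc test starting here drives everything over JSON-RPC: it creates an xnvme bdev on top of /dev/nvme0n1 with the libaio backend and conserve_cpu enabled (-c), reads the registered parameters back through framework_get_config, then deletes the bdev. A minimal standalone sketch of that flow follows; rpc_cmd in the harness wraps scripts/rpc.py, so the direct rpc.py calls, the sleep-based wait, and the pid handling are assumptions taken from this environment rather than the verbatim test code.

SPDK=/home/vagrant/spdk_repo/spdk                  # repo path as seen in this log
"$SPDK/build/bin/spdk_tgt" &                       # start the target (the test uses waitforlisten)
tgt_pid=$!
sleep 3                                            # crude substitute for waitforlisten

# create the xnvme bdev: filename, bdev name, io_mechanism, -c = conserve_cpu
"$SPDK/scripts/rpc.py" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c

# same check the rpc_xnvme helper performs: pull the registered params back out with jq
"$SPDK/scripts/rpc.py" framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params'

"$SPDK/scripts/rpc.py" bdev_xnvme_delete xnvme_bdev
kill "$tgt_pid"                                    # the test uses killprocess on the recorded pid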
00:13:00.991 [2024-11-27 21:45:23.492766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80762 ] 00:13:00.991 [2024-11-27 21:45:23.640388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.991 [2024-11-27 21:45:23.669482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.252 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:01.252 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:01.252 21:45:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:13:01.252 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.252 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.252 xnvme_bdev 00:13:01.252 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.252 21:45:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:01.252 21:45:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:01.252 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.252 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.252 21:45:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:01.513 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.513 21:45:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:01.513 21:45:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:01.513 21:45:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:01.513 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.513 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.513 21:45:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:01.513 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.513 21:45:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:01.513 21:45:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:01.513 21:45:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:01.513 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.513 21:45:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:01.513 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.513 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.513 21:45:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:13:01.513 21:45:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:01.513 21:45:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:01.513 21:45:24 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:01.513 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.514 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.514 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.514 21:45:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:01.514 21:45:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:01.514 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.514 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.514 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.514 21:45:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 80762 00:13:01.514 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 80762 ']' 00:13:01.514 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 80762 00:13:01.514 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:01.514 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:01.514 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80762 00:13:01.514 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:01.514 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:01.514 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80762' 00:13:01.514 killing process with pid 80762 00:13:01.514 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 80762 00:13:01.514 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 80762 00:13:01.776 ************************************ 00:13:01.776 END TEST xnvme_rpc 00:13:01.776 ************************************ 00:13:01.776 00:13:01.776 real 0m1.457s 00:13:01.776 user 0m1.503s 00:13:01.776 sys 0m0.420s 00:13:01.776 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:01.776 21:45:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.038 21:45:24 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:02.038 21:45:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:02.038 21:45:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.038 21:45:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:02.038 ************************************ 00:13:02.038 START TEST xnvme_bdevperf 00:13:02.038 ************************************ 00:13:02.038 21:45:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:02.038 21:45:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:02.038 21:45:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:13:02.038 21:45:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:02.038 21:45:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:02.038 21:45:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:02.038 21:45:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:02.038 21:45:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:02.038 { 00:13:02.038 "subsystems": [ 00:13:02.038 { 00:13:02.038 "subsystem": "bdev", 00:13:02.038 "config": [ 00:13:02.038 { 00:13:02.038 "params": { 00:13:02.038 "io_mechanism": "libaio", 00:13:02.038 "conserve_cpu": true, 00:13:02.038 "filename": "/dev/nvme0n1", 00:13:02.038 "name": "xnvme_bdev" 00:13:02.038 }, 00:13:02.038 "method": "bdev_xnvme_create" 00:13:02.038 }, 00:13:02.038 { 00:13:02.038 "method": "bdev_wait_for_examine" 00:13:02.038 } 00:13:02.038 ] 00:13:02.038 } 00:13:02.038 ] 00:13:02.038 } 00:13:02.038 [2024-11-27 21:45:25.000754] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:13:02.038 [2024-11-27 21:45:25.000881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80818 ] 00:13:02.038 [2024-11-27 21:45:25.147884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.301 [2024-11-27 21:45:25.177260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.301 Running I/O for 5 seconds... 00:13:04.193 31838.00 IOPS, 124.37 MiB/s [2024-11-27T21:45:28.702Z] 30506.50 IOPS, 119.17 MiB/s [2024-11-27T21:45:29.644Z] 30292.67 IOPS, 118.33 MiB/s [2024-11-27T21:45:30.586Z] 30462.00 IOPS, 118.99 MiB/s [2024-11-27T21:45:30.586Z] 31405.60 IOPS, 122.68 MiB/s 00:13:07.466 Latency(us) 00:13:07.466 [2024-11-27T21:45:30.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.466 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:07.466 xnvme_bdev : 5.00 31389.23 122.61 0.00 0.00 2034.39 431.66 13913.80 00:13:07.466 [2024-11-27T21:45:30.587Z] =================================================================================================================== 00:13:07.466 [2024-11-27T21:45:30.587Z] Total : 31389.23 122.61 0.00 0.00 2034.39 431.66 13913.80 00:13:07.466 21:45:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:07.466 21:45:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:07.466 21:45:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:07.466 21:45:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:07.466 21:45:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:07.466 { 00:13:07.466 "subsystems": [ 00:13:07.466 { 00:13:07.466 "subsystem": "bdev", 00:13:07.466 "config": [ 00:13:07.466 { 00:13:07.466 "params": { 00:13:07.466 "io_mechanism": "libaio", 00:13:07.466 "conserve_cpu": true, 00:13:07.466 "filename": "/dev/nvme0n1", 00:13:07.466 "name": "xnvme_bdev" 00:13:07.466 }, 00:13:07.466 "method": "bdev_xnvme_create" 00:13:07.466 }, 00:13:07.466 { 00:13:07.466 "method": "bdev_wait_for_examine" 00:13:07.466 } 00:13:07.466 ] 00:13:07.466 } 00:13:07.466 ] 00:13:07.466 } 00:13:07.466 [2024-11-27 21:45:30.558897] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
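For reference, the libaio randread bdevperf leg reported just above can be reproduced outside the harness by writing the generated JSON to a file instead of feeding it on /dev/fd/62. The config body and the bdevperf flags are copied from this log; only the /tmp path is an assumption.

cat > /tmp/xnvme_libaio.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "libaio",
            "conserve_cpu": true,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON

# same flags as the harness: 64-deep 4 KiB randread for 5 s against the xnvme_bdev target
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /tmp/xnvme_libaio.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096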
00:13:07.466 [2024-11-27 21:45:30.559045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80883 ] 00:13:07.736 [2024-11-27 21:45:30.704943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.736 [2024-11-27 21:45:30.733796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.736 Running I/O for 5 seconds... 00:13:10.067 34663.00 IOPS, 135.40 MiB/s [2024-11-27T21:45:34.131Z] 35729.00 IOPS, 139.57 MiB/s [2024-11-27T21:45:35.074Z] 35793.00 IOPS, 139.82 MiB/s [2024-11-27T21:45:36.018Z] 35390.75 IOPS, 138.25 MiB/s [2024-11-27T21:45:36.018Z] 35007.20 IOPS, 136.75 MiB/s 00:13:12.897 Latency(us) 00:13:12.897 [2024-11-27T21:45:36.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.897 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:12.897 xnvme_bdev : 5.00 34976.87 136.63 0.00 0.00 1824.94 453.71 6175.51 00:13:12.897 [2024-11-27T21:45:36.018Z] =================================================================================================================== 00:13:12.897 [2024-11-27T21:45:36.019Z] Total : 34976.87 136.63 0.00 0.00 1824.94 453.71 6175.51 00:13:13.159 00:13:13.159 real 0m11.113s 00:13:13.159 user 0m3.233s 00:13:13.159 sys 0m6.216s 00:13:13.159 ************************************ 00:13:13.159 END TEST xnvme_bdevperf 00:13:13.159 ************************************ 00:13:13.159 21:45:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:13.159 21:45:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:13.159 21:45:36 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:13.159 21:45:36 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:13.159 21:45:36 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:13.159 21:45:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:13.159 ************************************ 00:13:13.159 START TEST xnvme_fio_plugin 00:13:13.159 ************************************ 00:13:13.159 21:45:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:13.159 21:45:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:13.159 21:45:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:13.159 21:45:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:13.159 21:45:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:13.159 21:45:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:13.159 21:45:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:13.159 21:45:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:13:13.159 21:45:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:13.159 21:45:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:13.159 21:45:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:13.159 21:45:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:13.160 21:45:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:13.160 21:45:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:13.160 21:45:36 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:13.160 21:45:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:13.160 21:45:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:13.160 21:45:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:13.160 21:45:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:13.160 21:45:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:13.160 21:45:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:13.160 21:45:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:13.160 21:45:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:13.160 21:45:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:13.160 { 00:13:13.160 "subsystems": [ 00:13:13.160 { 00:13:13.160 "subsystem": "bdev", 00:13:13.160 "config": [ 00:13:13.160 { 00:13:13.160 "params": { 00:13:13.160 "io_mechanism": "libaio", 00:13:13.160 "conserve_cpu": true, 00:13:13.160 "filename": "/dev/nvme0n1", 00:13:13.160 "name": "xnvme_bdev" 00:13:13.160 }, 00:13:13.160 "method": "bdev_xnvme_create" 00:13:13.160 }, 00:13:13.160 { 00:13:13.160 "method": "bdev_wait_for_examine" 00:13:13.160 } 00:13:13.160 ] 00:13:13.160 } 00:13:13.160 ] 00:13:13.160 } 00:13:13.421 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:13.421 fio-3.35 00:13:13.421 Starting 1 thread 00:13:18.782 00:13:18.782 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=80986: Wed Nov 27 21:45:41 2024 00:13:18.782 read: IOPS=32.6k, BW=127MiB/s (134MB/s)(638MiB/5001msec) 00:13:18.782 slat (usec): min=4, max=2023, avg=23.01, stdev=95.41 00:13:18.782 clat (usec): min=107, max=6163, avg=1337.82, stdev=547.39 00:13:18.782 lat (usec): min=189, max=6262, avg=1360.82, stdev=539.03 00:13:18.782 clat percentiles (usec): 00:13:18.782 | 1.00th=[ 269], 5.00th=[ 502], 10.00th=[ 652], 20.00th=[ 881], 00:13:18.782 | 30.00th=[ 1057], 40.00th=[ 1188], 50.00th=[ 1303], 60.00th=[ 1434], 00:13:18.782 | 70.00th=[ 1582], 80.00th=[ 1745], 90.00th=[ 2008], 95.00th=[ 2245], 00:13:18.782 | 99.00th=[ 2999], 99.50th=[ 3294], 99.90th=[ 3884], 99.95th=[ 4080], 00:13:18.782 | 99.99th=[ 4359] 00:13:18.782 bw ( KiB/s): 
min=121840, max=138792, per=100.00%, avg=130832.89, stdev=6286.15, samples=9 00:13:18.782 iops : min=30460, max=34698, avg=32708.22, stdev=1571.54, samples=9 00:13:18.782 lat (usec) : 250=0.78%, 500=4.17%, 750=8.97%, 1000=12.39% 00:13:18.782 lat (msec) : 2=63.71%, 4=9.91%, 10=0.07% 00:13:18.782 cpu : usr=39.56%, sys=51.82%, ctx=22, majf=0, minf=1065 00:13:18.782 IO depths : 1=0.4%, 2=1.1%, 4=3.0%, 8=8.4%, 16=23.5%, 32=61.5%, >=64=2.1% 00:13:18.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:18.782 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:13:18.782 issued rwts: total=163200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:18.782 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:18.782 00:13:18.782 Run status group 0 (all jobs): 00:13:18.782 READ: bw=127MiB/s (134MB/s), 127MiB/s-127MiB/s (134MB/s-134MB/s), io=638MiB (668MB), run=5001-5001msec 00:13:19.043 ----------------------------------------------------- 00:13:19.043 Suppressions used: 00:13:19.043 count bytes template 00:13:19.043 1 11 /usr/src/fio/parse.c 00:13:19.043 1 8 libtcmalloc_minimal.so 00:13:19.043 1 904 libcrypto.so 00:13:19.043 ----------------------------------------------------- 00:13:19.043 00:13:19.043 21:45:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:19.303 21:45:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:19.304 21:45:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:19.304 21:45:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:19.304 21:45:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:19.304 21:45:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:19.304 21:45:42 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:19.304 21:45:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:19.304 21:45:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:19.304 21:45:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:19.304 21:45:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:19.304 21:45:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:19.304 21:45:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:19.304 21:45:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:19.304 21:45:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:19.304 21:45:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:19.304 21:45:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:19.304 
21:45:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:19.304 21:45:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:19.304 21:45:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:19.304 21:45:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:19.304 { 00:13:19.304 "subsystems": [ 00:13:19.304 { 00:13:19.304 "subsystem": "bdev", 00:13:19.304 "config": [ 00:13:19.304 { 00:13:19.304 "params": { 00:13:19.304 "io_mechanism": "libaio", 00:13:19.304 "conserve_cpu": true, 00:13:19.304 "filename": "/dev/nvme0n1", 00:13:19.304 "name": "xnvme_bdev" 00:13:19.304 }, 00:13:19.304 "method": "bdev_xnvme_create" 00:13:19.304 }, 00:13:19.304 { 00:13:19.304 "method": "bdev_wait_for_examine" 00:13:19.304 } 00:13:19.304 ] 00:13:19.304 } 00:13:19.304 ] 00:13:19.304 } 00:13:19.304 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:19.304 fio-3.35 00:13:19.304 Starting 1 thread 00:13:25.897 00:13:25.897 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81072: Wed Nov 27 21:45:47 2024 00:13:25.897 write: IOPS=34.6k, BW=135MiB/s (142MB/s)(677MiB/5001msec); 0 zone resets 00:13:25.897 slat (usec): min=4, max=1942, avg=21.97, stdev=86.73 00:13:25.897 clat (usec): min=108, max=6054, avg=1250.71, stdev=551.77 00:13:25.897 lat (usec): min=190, max=6138, avg=1272.68, stdev=545.51 00:13:25.897 clat percentiles (usec): 00:13:25.897 | 1.00th=[ 265], 5.00th=[ 424], 10.00th=[ 578], 20.00th=[ 775], 00:13:25.897 | 30.00th=[ 930], 40.00th=[ 1074], 50.00th=[ 1205], 60.00th=[ 1352], 00:13:25.897 | 70.00th=[ 1500], 80.00th=[ 1680], 90.00th=[ 1958], 95.00th=[ 2212], 00:13:25.897 | 99.00th=[ 2802], 99.50th=[ 3097], 99.90th=[ 3654], 99.95th=[ 3884], 00:13:25.897 | 99.99th=[ 4293] 00:13:25.897 bw ( KiB/s): min=128600, max=152544, per=100.00%, avg=140432.00, stdev=8224.34, samples=9 00:13:25.897 iops : min=32150, max=38136, avg=35108.00, stdev=2056.08, samples=9 00:13:25.897 lat (usec) : 250=0.82%, 500=6.38%, 750=11.46%, 1000=16.21% 00:13:25.897 lat (msec) : 2=56.37%, 4=8.71%, 10=0.04% 00:13:25.897 cpu : usr=37.72%, sys=52.92%, ctx=20, majf=0, minf=1066 00:13:25.897 IO depths : 1=0.4%, 2=1.0%, 4=3.0%, 8=8.7%, 16=23.8%, 32=61.1%, >=64=2.0% 00:13:25.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.897 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:13:25.897 issued rwts: total=0,173269,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.897 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:25.897 00:13:25.897 Run status group 0 (all jobs): 00:13:25.897 WRITE: bw=135MiB/s (142MB/s), 135MiB/s-135MiB/s (142MB/s-142MB/s), io=677MiB (710MB), run=5001-5001msec 00:13:25.897 ----------------------------------------------------- 00:13:25.897 Suppressions used: 00:13:25.897 count bytes template 00:13:25.897 1 11 /usr/src/fio/parse.c 00:13:25.897 1 8 libtcmalloc_minimal.so 00:13:25.897 1 904 libcrypto.so 00:13:25.897 ----------------------------------------------------- 00:13:25.897 00:13:25.897 00:13:25.897 real 0m12.081s 00:13:25.897 user 0m4.973s 00:13:25.897 sys 0m5.831s 00:13:25.897 
21:45:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:25.897 ************************************ 00:13:25.897 END TEST xnvme_fio_plugin 00:13:25.897 ************************************ 00:13:25.897 21:45:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:25.897 21:45:48 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:25.897 21:45:48 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:25.897 21:45:48 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:25.897 21:45:48 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:25.897 21:45:48 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:25.897 21:45:48 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:25.897 21:45:48 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:25.897 21:45:48 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:25.897 21:45:48 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:25.897 21:45:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:25.897 21:45:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:25.897 21:45:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:25.897 ************************************ 00:13:25.897 START TEST xnvme_rpc 00:13:25.897 ************************************ 00:13:25.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.897 21:45:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:25.897 21:45:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:25.897 21:45:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:25.897 21:45:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:25.897 21:45:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:25.897 21:45:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=81153 00:13:25.897 21:45:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 81153 00:13:25.897 21:45:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 81153 ']' 00:13:25.897 21:45:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.897 21:45:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:25.897 21:45:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.897 21:45:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:25.897 21:45:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:25.897 21:45:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.897 [2024-11-27 21:45:48.350411] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
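The fio_plugin legs above run stock fio with SPDK's external spdk_bdev ioengine preloaded and point --filename at the bdev name rather than a device node. A rough standalone equivalent of the randwrite job that just finished (libaio backend, conserve_cpu on) is sketched below, assuming a JSON config file like the one written in the earlier bdevperf sketch in place of /dev/fd/62; the libasan entry in LD_PRELOAD only applies to this ASan-instrumented build.

LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
/usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_libaio.json --filename=xnvme_bdev \
  --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite \
  --time_based --runtime=5 --thread=1 --name xnvme_bdev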
00:13:25.897 [2024-11-27 21:45:48.350772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81153 ] 00:13:25.897 [2024-11-27 21:45:48.498597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.897 [2024-11-27 21:45:48.527703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.159 21:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:26.159 21:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:26.159 21:45:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:26.159 21:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.159 21:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.159 xnvme_bdev 00:13:26.159 21:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.159 21:45:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:26.159 21:45:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:26.159 21:45:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:26.159 21:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.159 21:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.159 21:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.159 21:45:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:26.159 21:45:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:26.159 21:45:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:26.159 21:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.159 21:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.159 21:45:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:26.159 21:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.420 21:45:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:26.420 21:45:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:26.420 21:45:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:26.420 21:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.420 21:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.420 21:45:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:26.420 21:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.420 21:45:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:26.420 21:45:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:26.420 21:45:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:26.420 21:45:49 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.420 21:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.420 21:45:49 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:26.420 21:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.420 21:45:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:26.421 21:45:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:26.421 21:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.421 21:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.421 21:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.421 21:45:49 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 81153 00:13:26.421 21:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 81153 ']' 00:13:26.421 21:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 81153 00:13:26.421 21:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:26.421 21:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:26.421 21:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81153 00:13:26.421 killing process with pid 81153 00:13:26.421 21:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:26.421 21:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:26.421 21:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81153' 00:13:26.421 21:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 81153 00:13:26.421 21:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 81153 00:13:26.682 ************************************ 00:13:26.683 END TEST xnvme_rpc 00:13:26.683 ************************************ 00:13:26.683 00:13:26.683 real 0m1.410s 00:13:26.683 user 0m1.533s 00:13:26.683 sys 0m0.365s 00:13:26.683 21:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:26.683 21:45:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.683 21:45:49 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:26.683 21:45:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:26.683 21:45:49 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:26.683 21:45:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:26.683 ************************************ 00:13:26.683 START TEST xnvme_bdevperf 00:13:26.683 ************************************ 00:13:26.683 21:45:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:26.683 21:45:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:26.683 21:45:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:26.683 21:45:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:26.683 21:45:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:26.683 21:45:49 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:26.683 21:45:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:26.683 21:45:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:26.683 { 00:13:26.683 "subsystems": [ 00:13:26.683 { 00:13:26.683 "subsystem": "bdev", 00:13:26.683 "config": [ 00:13:26.683 { 00:13:26.683 "params": { 00:13:26.683 "io_mechanism": "io_uring", 00:13:26.683 "conserve_cpu": false, 00:13:26.683 "filename": "/dev/nvme0n1", 00:13:26.683 "name": "xnvme_bdev" 00:13:26.683 }, 00:13:26.683 "method": "bdev_xnvme_create" 00:13:26.683 }, 00:13:26.683 { 00:13:26.683 "method": "bdev_wait_for_examine" 00:13:26.683 } 00:13:26.683 ] 00:13:26.683 } 00:13:26.683 ] 00:13:26.683 } 00:13:26.945 [2024-11-27 21:45:49.808998] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:13:26.945 [2024-11-27 21:45:49.809413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81205 ] 00:13:26.945 [2024-11-27 21:45:49.956120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.945 [2024-11-27 21:45:49.986451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.205 Running I/O for 5 seconds... 00:13:29.088 32780.00 IOPS, 128.05 MiB/s [2024-11-27T21:45:53.152Z] 32897.50 IOPS, 128.51 MiB/s [2024-11-27T21:45:54.542Z] 33968.67 IOPS, 132.69 MiB/s [2024-11-27T21:45:55.116Z] 34635.50 IOPS, 135.29 MiB/s [2024-11-27T21:45:55.116Z] 34514.40 IOPS, 134.82 MiB/s 00:13:31.995 Latency(us) 00:13:31.995 [2024-11-27T21:45:55.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.995 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:31.995 xnvme_bdev : 5.00 34512.99 134.82 0.00 0.00 1850.69 381.24 6654.42 00:13:31.995 [2024-11-27T21:45:55.116Z] =================================================================================================================== 00:13:31.995 [2024-11-27T21:45:55.116Z] Total : 34512.99 134.82 0.00 0.00 1850.69 381.24 6654.42 00:13:32.256 21:45:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:32.256 21:45:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:32.256 21:45:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:32.257 21:45:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:32.257 21:45:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:32.257 { 00:13:32.257 "subsystems": [ 00:13:32.257 { 00:13:32.257 "subsystem": "bdev", 00:13:32.257 "config": [ 00:13:32.257 { 00:13:32.257 "params": { 00:13:32.257 "io_mechanism": "io_uring", 00:13:32.257 "conserve_cpu": false, 00:13:32.257 "filename": "/dev/nvme0n1", 00:13:32.257 "name": "xnvme_bdev" 00:13:32.257 }, 00:13:32.257 "method": "bdev_xnvme_create" 00:13:32.257 }, 00:13:32.257 { 00:13:32.257 "method": "bdev_wait_for_examine" 00:13:32.257 } 00:13:32.257 ] 00:13:32.257 } 00:13:32.257 ] 00:13:32.257 } 00:13:32.257 [2024-11-27 21:45:55.352646] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
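The io_uring bdevperf leg whose randread results appear above uses the same command line as the libaio one; only the generated JSON differs in io_mechanism and conserve_cpu. Assuming the /tmp config file from the earlier sketch exists, the variant can be derived with jq rather than retyped:

jq '.subsystems[0].config[0].params.io_mechanism = "io_uring"
    | .subsystems[0].config[0].params.conserve_cpu = false' \
   /tmp/xnvme_libaio.json > /tmp/xnvme_io_uring.json

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /tmp/xnvme_io_uring.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096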
00:13:32.257 [2024-11-27 21:45:55.353098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81275 ] 00:13:32.518 [2024-11-27 21:45:55.500019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.518 [2024-11-27 21:45:55.528896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.518 Running I/O for 5 seconds... 00:13:34.853 35690.00 IOPS, 139.41 MiB/s [2024-11-27T21:45:58.918Z] 35027.00 IOPS, 136.82 MiB/s [2024-11-27T21:45:59.862Z] 35255.67 IOPS, 137.72 MiB/s [2024-11-27T21:46:00.807Z] 35372.25 IOPS, 138.17 MiB/s [2024-11-27T21:46:00.807Z] 34986.80 IOPS, 136.67 MiB/s 00:13:37.686 Latency(us) 00:13:37.686 [2024-11-27T21:46:00.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.686 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:37.686 xnvme_bdev : 5.00 34983.76 136.66 0.00 0.00 1825.42 401.72 6099.89 00:13:37.686 [2024-11-27T21:46:00.807Z] =================================================================================================================== 00:13:37.686 [2024-11-27T21:46:00.807Z] Total : 34983.76 136.66 0.00 0.00 1825.42 401.72 6099.89 00:13:37.948 00:13:37.948 real 0m11.075s 00:13:37.948 user 0m4.618s 00:13:37.948 sys 0m6.202s 00:13:37.948 ************************************ 00:13:37.948 END TEST xnvme_bdevperf 00:13:37.948 ************************************ 00:13:37.948 21:46:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:37.948 21:46:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:37.948 21:46:00 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:37.948 21:46:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:37.948 21:46:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:37.948 21:46:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:37.948 ************************************ 00:13:37.948 START TEST xnvme_fio_plugin 00:13:37.948 ************************************ 00:13:37.948 21:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:37.948 21:46:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:37.948 21:46:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:37.948 21:46:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:37.948 21:46:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:37.948 21:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:37.948 21:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:37.948 21:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:13:37.948 21:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:37.948 21:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:37.948 21:46:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:37.948 21:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:37.948 21:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:37.948 21:46:00 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:37.948 21:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:37.948 21:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:37.948 21:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:37.948 21:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:37.948 21:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:37.948 21:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:37.948 21:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:37.948 21:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:37.948 21:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:37.948 21:46:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:37.948 { 00:13:37.948 "subsystems": [ 00:13:37.948 { 00:13:37.948 "subsystem": "bdev", 00:13:37.948 "config": [ 00:13:37.948 { 00:13:37.948 "params": { 00:13:37.948 "io_mechanism": "io_uring", 00:13:37.948 "conserve_cpu": false, 00:13:37.948 "filename": "/dev/nvme0n1", 00:13:37.948 "name": "xnvme_bdev" 00:13:37.948 }, 00:13:37.948 "method": "bdev_xnvme_create" 00:13:37.948 }, 00:13:37.948 { 00:13:37.948 "method": "bdev_wait_for_examine" 00:13:37.948 } 00:13:37.948 ] 00:13:37.948 } 00:13:37.948 ] 00:13:37.948 } 00:13:38.210 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:38.210 fio-3.35 00:13:38.210 Starting 1 thread 00:13:43.501 00:13:43.501 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81378: Wed Nov 27 21:46:06 2024 00:13:43.501 read: IOPS=33.2k, BW=130MiB/s (136MB/s)(649MiB/5001msec) 00:13:43.501 slat (usec): min=2, max=152, avg= 3.50, stdev= 1.92 00:13:43.501 clat (usec): min=966, max=4854, avg=1785.78, stdev=339.97 00:13:43.501 lat (usec): min=969, max=4866, avg=1789.28, stdev=340.37 00:13:43.501 clat percentiles (usec): 00:13:43.501 | 1.00th=[ 1188], 5.00th=[ 1303], 10.00th=[ 1385], 20.00th=[ 1500], 00:13:43.501 | 30.00th=[ 1582], 40.00th=[ 1680], 50.00th=[ 1745], 60.00th=[ 1827], 00:13:43.501 | 70.00th=[ 1926], 80.00th=[ 2057], 90.00th=[ 2245], 95.00th=[ 2376], 00:13:43.501 | 99.00th=[ 2737], 99.50th=[ 2868], 99.90th=[ 3163], 99.95th=[ 3458], 00:13:43.501 | 99.99th=[ 4817] 00:13:43.501 bw ( KiB/s): 
min=123392, max=146432, per=98.25%, avg=130560.00, stdev=6660.92, samples=9 00:13:43.501 iops : min=30848, max=36608, avg=32640.00, stdev=1665.23, samples=9 00:13:43.501 lat (usec) : 1000=0.01% 00:13:43.501 lat (msec) : 2=76.47%, 4=23.49%, 10=0.04% 00:13:43.501 cpu : usr=32.14%, sys=66.70%, ctx=13, majf=0, minf=1063 00:13:43.501 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:43.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.501 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:43.501 issued rwts: total=166144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:43.501 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:43.501 00:13:43.501 Run status group 0 (all jobs): 00:13:43.501 READ: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=649MiB (681MB), run=5001-5001msec 00:13:43.763 ----------------------------------------------------- 00:13:43.763 Suppressions used: 00:13:43.763 count bytes template 00:13:43.763 1 11 /usr/src/fio/parse.c 00:13:43.763 1 8 libtcmalloc_minimal.so 00:13:43.763 1 904 libcrypto.so 00:13:43.763 ----------------------------------------------------- 00:13:43.763 00:13:44.025 21:46:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:44.025 21:46:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:44.025 21:46:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:44.025 21:46:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:44.025 21:46:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:44.025 21:46:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:44.025 21:46:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:44.025 21:46:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:44.025 21:46:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:44.025 21:46:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:44.025 21:46:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:44.025 21:46:06 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:44.025 21:46:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:44.025 21:46:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:44.025 21:46:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:44.025 21:46:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:44.025 21:46:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:44.025 21:46:06 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:44.025 21:46:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:44.025 21:46:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:44.025 21:46:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:44.025 { 00:13:44.025 "subsystems": [ 00:13:44.025 { 00:13:44.025 "subsystem": "bdev", 00:13:44.025 "config": [ 00:13:44.025 { 00:13:44.025 "params": { 00:13:44.025 "io_mechanism": "io_uring", 00:13:44.025 "conserve_cpu": false, 00:13:44.025 "filename": "/dev/nvme0n1", 00:13:44.025 "name": "xnvme_bdev" 00:13:44.025 }, 00:13:44.025 "method": "bdev_xnvme_create" 00:13:44.025 }, 00:13:44.025 { 00:13:44.025 "method": "bdev_wait_for_examine" 00:13:44.025 } 00:13:44.025 ] 00:13:44.025 } 00:13:44.025 ] 00:13:44.025 } 00:13:44.025 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:44.025 fio-3.35 00:13:44.025 Starting 1 thread 00:13:50.615 00:13:50.615 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81458: Wed Nov 27 21:46:12 2024 00:13:50.615 write: IOPS=34.1k, BW=133MiB/s (140MB/s)(666MiB/5001msec); 0 zone resets 00:13:50.615 slat (nsec): min=2921, max=59155, avg=3673.75, stdev=1746.04 00:13:50.615 clat (usec): min=160, max=8989, avg=1729.19, stdev=298.39 00:13:50.615 lat (usec): min=173, max=8992, avg=1732.86, stdev=298.70 00:13:50.615 clat percentiles (usec): 00:13:50.615 | 1.00th=[ 1188], 5.00th=[ 1303], 10.00th=[ 1385], 20.00th=[ 1483], 00:13:50.615 | 30.00th=[ 1565], 40.00th=[ 1631], 50.00th=[ 1696], 60.00th=[ 1778], 00:13:50.615 | 70.00th=[ 1844], 80.00th=[ 1942], 90.00th=[ 2114], 95.00th=[ 2278], 00:13:50.615 | 99.00th=[ 2606], 99.50th=[ 2769], 99.90th=[ 3228], 99.95th=[ 3556], 00:13:50.615 | 99.99th=[ 3884] 00:13:50.615 bw ( KiB/s): min=128510, max=155136, per=100.00%, avg=136845.11, stdev=7902.83, samples=9 00:13:50.615 iops : min=32127, max=38784, avg=34211.22, stdev=1975.77, samples=9 00:13:50.615 lat (usec) : 250=0.01%, 1000=0.01% 00:13:50.615 lat (msec) : 2=84.43%, 4=15.56%, 10=0.01% 00:13:50.615 cpu : usr=33.08%, sys=65.78%, ctx=12, majf=0, minf=1064 00:13:50.615 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:13:50.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.615 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:50.615 issued rwts: total=0,170615,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.615 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:50.615 00:13:50.615 Run status group 0 (all jobs): 00:13:50.615 WRITE: bw=133MiB/s (140MB/s), 133MiB/s-133MiB/s (140MB/s-140MB/s), io=666MiB (699MB), run=5001-5001msec 00:13:50.615 ----------------------------------------------------- 00:13:50.615 Suppressions used: 00:13:50.615 count bytes template 00:13:50.615 1 11 /usr/src/fio/parse.c 00:13:50.615 1 8 libtcmalloc_minimal.so 00:13:50.615 1 904 libcrypto.so 00:13:50.615 ----------------------------------------------------- 00:13:50.615 00:13:50.615 00:13:50.615 real 0m12.007s 00:13:50.615 user 0m4.415s 00:13:50.615 sys 0m7.161s 00:13:50.615 21:46:12 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:50.615 21:46:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:50.615 ************************************ 00:13:50.615 END TEST xnvme_fio_plugin 00:13:50.615 ************************************ 00:13:50.615 21:46:12 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:50.615 21:46:12 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:50.615 21:46:12 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:50.615 21:46:12 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:50.615 21:46:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:50.615 21:46:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:50.615 21:46:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:50.615 ************************************ 00:13:50.615 START TEST xnvme_rpc 00:13:50.615 ************************************ 00:13:50.615 21:46:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:50.615 21:46:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:50.615 21:46:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:50.615 21:46:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:50.615 21:46:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:50.615 21:46:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=81549 00:13:50.615 21:46:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 81549 00:13:50.615 21:46:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 81549 ']' 00:13:50.615 21:46:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.615 21:46:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:50.615 21:46:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:50.615 21:46:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.615 21:46:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:50.615 21:46:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.615 [2024-11-27 21:46:13.042822] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:13:50.615 [2024-11-27 21:46:13.043198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81549 ] 00:13:50.615 [2024-11-27 21:46:13.191750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.615 [2024-11-27 21:46:13.220444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.876 21:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:50.876 21:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:50.876 21:46:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:50.876 21:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.876 21:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.876 xnvme_bdev 00:13:50.876 21:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.876 21:46:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:50.876 21:46:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:50.876 21:46:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:50.876 21:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.876 21:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.876 21:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.876 21:46:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:50.876 21:46:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:50.876 21:46:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:50.876 21:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.876 21:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.876 21:46:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:50.876 21:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.876 21:46:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:50.876 21:46:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:50.876 21:46:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:50.876 21:46:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:50.876 21:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.876 21:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.876 21:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.136 21:46:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:51.136 21:46:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:51.136 21:46:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:51.136 21:46:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:51.136 21:46:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.136 21:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.136 21:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.136 21:46:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:51.136 21:46:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:51.136 21:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.136 21:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.136 21:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.136 21:46:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 81549 00:13:51.136 21:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 81549 ']' 00:13:51.136 21:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 81549 00:13:51.136 21:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:51.136 21:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:51.136 21:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81549 00:13:51.136 killing process with pid 81549 00:13:51.136 21:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:51.136 21:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:51.136 21:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81549' 00:13:51.136 21:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 81549 00:13:51.136 21:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 81549 00:13:51.396 00:13:51.396 real 0m1.425s 00:13:51.396 user 0m1.478s 00:13:51.396 sys 0m0.405s 00:13:51.396 ************************************ 00:13:51.396 END TEST xnvme_rpc 00:13:51.396 ************************************ 00:13:51.396 21:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.396 21:46:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:51.396 21:46:14 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:51.396 21:46:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:51.396 21:46:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:51.396 21:46:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:51.396 ************************************ 00:13:51.396 START TEST xnvme_bdevperf 00:13:51.396 ************************************ 00:13:51.396 21:46:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:51.396 21:46:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:51.396 21:46:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:51.396 21:46:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:51.396 21:46:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:51.396 21:46:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
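[Editorial aside, not part of the captured output] The xnvme_bdevperf test starting here drives SPDK's bdevperf example against an xnvme bdev described by the JSON that gen_conf emits just below. A minimal sketch of reproducing such a run by hand, assuming the same ~/spdk_repo/spdk checkout as this run and an NVMe namespace at /dev/nvme0n1 (both placeholders; the harness streams the config over /dev/fd/62 rather than a file):

# Describe one xnvme bdev the same way the generated config shown below does.
cat > /tmp/xnvme_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_xnvme_create",
          "params": {
            "io_mechanism": "io_uring",
            "conserve_cpu": true,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          }
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF

# Mirror the harness invocation: 64-deep 4 KiB random reads for 5 seconds against the xnvme_bdev bdev.
~/spdk_repo/spdk/build/examples/bdevperf --json /tmp/xnvme_bdev.json \
    -q 64 -w randread -t 5 -T xnvme_bdev -o 4096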
00:13:51.396 21:46:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:51.396 21:46:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:51.396 { 00:13:51.396 "subsystems": [ 00:13:51.396 { 00:13:51.396 "subsystem": "bdev", 00:13:51.396 "config": [ 00:13:51.396 { 00:13:51.396 "params": { 00:13:51.396 "io_mechanism": "io_uring", 00:13:51.396 "conserve_cpu": true, 00:13:51.396 "filename": "/dev/nvme0n1", 00:13:51.396 "name": "xnvme_bdev" 00:13:51.396 }, 00:13:51.396 "method": "bdev_xnvme_create" 00:13:51.396 }, 00:13:51.396 { 00:13:51.396 "method": "bdev_wait_for_examine" 00:13:51.396 } 00:13:51.396 ] 00:13:51.396 } 00:13:51.396 ] 00:13:51.396 } 00:13:51.657 [2024-11-27 21:46:14.526810] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:13:51.657 [2024-11-27 21:46:14.526944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81601 ] 00:13:51.657 [2024-11-27 21:46:14.675027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.657 [2024-11-27 21:46:14.703895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.919 Running I/O for 5 seconds... 00:13:53.896 33353.00 IOPS, 130.29 MiB/s [2024-11-27T21:46:17.958Z] 33160.00 IOPS, 129.53 MiB/s [2024-11-27T21:46:18.903Z] 34052.67 IOPS, 133.02 MiB/s [2024-11-27T21:46:19.846Z] 34621.00 IOPS, 135.24 MiB/s [2024-11-27T21:46:19.846Z] 34361.20 IOPS, 134.22 MiB/s 00:13:56.725 Latency(us) 00:13:56.725 [2024-11-27T21:46:19.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.725 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:56.725 xnvme_bdev : 5.00 34348.79 134.17 0.00 0.00 1859.51 730.98 9628.75 00:13:56.725 [2024-11-27T21:46:19.846Z] =================================================================================================================== 00:13:56.725 [2024-11-27T21:46:19.846Z] Total : 34348.79 134.17 0.00 0.00 1859.51 730.98 9628.75 00:13:56.986 21:46:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:56.986 21:46:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:56.986 21:46:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:56.986 21:46:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:56.986 21:46:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:56.986 { 00:13:56.986 "subsystems": [ 00:13:56.986 { 00:13:56.986 "subsystem": "bdev", 00:13:56.986 "config": [ 00:13:56.986 { 00:13:56.986 "params": { 00:13:56.986 "io_mechanism": "io_uring", 00:13:56.986 "conserve_cpu": true, 00:13:56.986 "filename": "/dev/nvme0n1", 00:13:56.986 "name": "xnvme_bdev" 00:13:56.986 }, 00:13:56.986 "method": "bdev_xnvme_create" 00:13:56.986 }, 00:13:56.986 { 00:13:56.986 "method": "bdev_wait_for_examine" 00:13:56.986 } 00:13:56.986 ] 00:13:56.986 } 00:13:56.986 ] 00:13:56.986 } 00:13:56.986 [2024-11-27 21:46:20.066623] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:13:56.986 [2024-11-27 21:46:20.066759] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81671 ] 00:13:57.247 [2024-11-27 21:46:20.205449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.247 [2024-11-27 21:46:20.234744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.247 Running I/O for 5 seconds... 00:13:59.579 34346.00 IOPS, 134.16 MiB/s [2024-11-27T21:46:23.646Z] 34513.50 IOPS, 134.82 MiB/s [2024-11-27T21:46:24.590Z] 34525.00 IOPS, 134.86 MiB/s [2024-11-27T21:46:25.536Z] 34576.00 IOPS, 135.06 MiB/s [2024-11-27T21:46:25.536Z] 34569.00 IOPS, 135.04 MiB/s 00:14:02.415 Latency(us) 00:14:02.415 [2024-11-27T21:46:25.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.415 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:02.415 xnvme_bdev : 5.00 34564.49 135.02 0.00 0.00 1847.63 428.50 9175.04 00:14:02.415 [2024-11-27T21:46:25.536Z] =================================================================================================================== 00:14:02.415 [2024-11-27T21:46:25.536Z] Total : 34564.49 135.02 0.00 0.00 1847.63 428.50 9175.04 00:14:02.415 00:14:02.415 real 0m11.063s 00:14:02.415 user 0m7.322s 00:14:02.415 sys 0m3.208s 00:14:02.415 21:46:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:02.415 21:46:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:02.415 ************************************ 00:14:02.415 END TEST xnvme_bdevperf 00:14:02.415 ************************************ 00:14:02.677 21:46:25 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:02.677 21:46:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:02.677 21:46:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:02.677 21:46:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:02.677 ************************************ 00:14:02.677 START TEST xnvme_fio_plugin 00:14:02.677 ************************************ 00:14:02.677 21:46:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:02.677 21:46:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:02.677 21:46:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:02.677 21:46:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:02.677 21:46:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:02.677 21:46:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:02.677 21:46:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:02.677 21:46:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:14:02.677 21:46:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:02.677 21:46:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:02.677 21:46:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:02.677 21:46:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:02.677 21:46:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:02.677 21:46:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:02.677 21:46:25 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:02.677 21:46:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:02.677 21:46:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:02.677 21:46:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:02.677 21:46:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:02.677 21:46:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:02.677 21:46:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:02.677 21:46:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:02.677 21:46:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:02.677 21:46:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:02.677 { 00:14:02.677 "subsystems": [ 00:14:02.677 { 00:14:02.677 "subsystem": "bdev", 00:14:02.677 "config": [ 00:14:02.677 { 00:14:02.677 "params": { 00:14:02.677 "io_mechanism": "io_uring", 00:14:02.677 "conserve_cpu": true, 00:14:02.677 "filename": "/dev/nvme0n1", 00:14:02.677 "name": "xnvme_bdev" 00:14:02.677 }, 00:14:02.677 "method": "bdev_xnvme_create" 00:14:02.677 }, 00:14:02.677 { 00:14:02.677 "method": "bdev_wait_for_examine" 00:14:02.677 } 00:14:02.677 ] 00:14:02.677 } 00:14:02.677 ] 00:14:02.677 } 00:14:02.677 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:02.677 fio-3.35 00:14:02.678 Starting 1 thread 00:14:09.300 00:14:09.300 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81779: Wed Nov 27 21:46:31 2024 00:14:09.300 read: IOPS=32.8k, BW=128MiB/s (134MB/s)(641MiB/5002msec) 00:14:09.300 slat (usec): min=2, max=230, avg= 3.46, stdev= 2.02 00:14:09.300 clat (usec): min=1075, max=4776, avg=1809.24, stdev=271.89 00:14:09.300 lat (usec): min=1078, max=4783, avg=1812.70, stdev=272.08 00:14:09.300 clat percentiles (usec): 00:14:09.300 | 1.00th=[ 1319], 5.00th=[ 1434], 10.00th=[ 1500], 20.00th=[ 1582], 00:14:09.300 | 30.00th=[ 1647], 40.00th=[ 1713], 50.00th=[ 1778], 60.00th=[ 1844], 00:14:09.300 | 70.00th=[ 1926], 80.00th=[ 2024], 90.00th=[ 2147], 95.00th=[ 2278], 00:14:09.300 | 99.00th=[ 2540], 99.50th=[ 2638], 99.90th=[ 3228], 99.95th=[ 3982], 00:14:09.300 | 99.99th=[ 4686] 00:14:09.300 bw ( KiB/s): 
min=129532, max=133632, per=100.00%, avg=131441.33, stdev=1815.74, samples=9 00:14:09.300 iops : min=32383, max=33408, avg=32860.33, stdev=453.94, samples=9 00:14:09.300 lat (msec) : 2=78.39%, 4=21.56%, 10=0.05% 00:14:09.300 cpu : usr=60.69%, sys=35.11%, ctx=16, majf=0, minf=1063 00:14:09.300 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:09.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:09.300 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:09.300 issued rwts: total=164145,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:09.300 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:09.300 00:14:09.300 Run status group 0 (all jobs): 00:14:09.300 READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=641MiB (672MB), run=5002-5002msec 00:14:09.300 ----------------------------------------------------- 00:14:09.300 Suppressions used: 00:14:09.300 count bytes template 00:14:09.300 1 11 /usr/src/fio/parse.c 00:14:09.300 1 8 libtcmalloc_minimal.so 00:14:09.300 1 904 libcrypto.so 00:14:09.300 ----------------------------------------------------- 00:14:09.300 00:14:09.300 21:46:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:09.300 21:46:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:09.300 21:46:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:09.300 21:46:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:09.300 21:46:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:09.300 21:46:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:09.300 21:46:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:09.300 21:46:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:09.300 21:46:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:09.300 21:46:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:09.300 21:46:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:09.300 21:46:31 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:09.300 21:46:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:09.300 21:46:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:09.300 21:46:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:09.300 21:46:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:09.300 21:46:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:09.300 21:46:31 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:09.300 21:46:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:09.300 21:46:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:09.300 21:46:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:09.300 { 00:14:09.300 "subsystems": [ 00:14:09.300 { 00:14:09.300 "subsystem": "bdev", 00:14:09.300 "config": [ 00:14:09.300 { 00:14:09.300 "params": { 00:14:09.300 "io_mechanism": "io_uring", 00:14:09.300 "conserve_cpu": true, 00:14:09.300 "filename": "/dev/nvme0n1", 00:14:09.300 "name": "xnvme_bdev" 00:14:09.300 }, 00:14:09.300 "method": "bdev_xnvme_create" 00:14:09.300 }, 00:14:09.300 { 00:14:09.300 "method": "bdev_wait_for_examine" 00:14:09.300 } 00:14:09.300 ] 00:14:09.300 } 00:14:09.300 ] 00:14:09.300 } 00:14:09.300 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:09.300 fio-3.35 00:14:09.300 Starting 1 thread 00:14:14.596 00:14:14.596 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81854: Wed Nov 27 21:46:37 2024 00:14:14.596 write: IOPS=33.7k, BW=132MiB/s (138MB/s)(659MiB/5002msec); 0 zone resets 00:14:14.596 slat (nsec): min=2933, max=99598, avg=3682.81, stdev=1951.47 00:14:14.596 clat (usec): min=1056, max=5947, avg=1748.62, stdev=263.68 00:14:14.596 lat (usec): min=1059, max=5950, avg=1752.30, stdev=263.96 00:14:14.596 clat percentiles (usec): 00:14:14.596 | 1.00th=[ 1287], 5.00th=[ 1385], 10.00th=[ 1450], 20.00th=[ 1532], 00:14:14.596 | 30.00th=[ 1598], 40.00th=[ 1663], 50.00th=[ 1713], 60.00th=[ 1778], 00:14:14.596 | 70.00th=[ 1844], 80.00th=[ 1942], 90.00th=[ 2114], 95.00th=[ 2212], 00:14:14.596 | 99.00th=[ 2474], 99.50th=[ 2573], 99.90th=[ 3294], 99.95th=[ 3752], 00:14:14.596 | 99.99th=[ 4293] 00:14:14.596 bw ( KiB/s): min=133357, max=137096, per=99.96%, avg=134863.67, stdev=1381.24, samples=9 00:14:14.596 iops : min=33341, max=34274, avg=33715.89, stdev=345.31, samples=9 00:14:14.596 lat (msec) : 2=84.01%, 4=15.95%, 10=0.04% 00:14:14.596 cpu : usr=64.73%, sys=31.49%, ctx=11, majf=0, minf=1064 00:14:14.596 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:14:14.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:14.596 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:14.596 issued rwts: total=0,168714,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:14.596 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:14.596 00:14:14.596 Run status group 0 (all jobs): 00:14:14.596 WRITE: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=659MiB (691MB), run=5002-5002msec 00:14:14.596 ----------------------------------------------------- 00:14:14.596 Suppressions used: 00:14:14.596 count bytes template 00:14:14.596 1 11 /usr/src/fio/parse.c 00:14:14.596 1 8 libtcmalloc_minimal.so 00:14:14.596 1 904 libcrypto.so 00:14:14.596 ----------------------------------------------------- 00:14:14.596 00:14:14.596 ************************************ 00:14:14.596 END TEST xnvme_fio_plugin 00:14:14.596 ************************************ 00:14:14.596 00:14:14.596 real 0m12.000s 00:14:14.596 user 
0m7.405s 00:14:14.596 sys 0m3.886s 00:14:14.596 21:46:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:14.596 21:46:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:14.596 21:46:37 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:14.596 21:46:37 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:14:14.596 21:46:37 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:14:14.596 21:46:37 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:14:14.596 21:46:37 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:14.596 21:46:37 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:14.596 21:46:37 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:14.596 21:46:37 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:14.596 21:46:37 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:14.596 21:46:37 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:14.596 21:46:37 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:14.596 21:46:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:14.596 ************************************ 00:14:14.596 START TEST xnvme_rpc 00:14:14.596 ************************************ 00:14:14.596 21:46:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:14.596 21:46:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:14.596 21:46:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:14.596 21:46:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:14.596 21:46:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:14.596 21:46:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=81935 00:14:14.596 21:46:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 81935 00:14:14.596 21:46:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 81935 ']' 00:14:14.596 21:46:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.596 21:46:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.596 21:46:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.596 21:46:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:14.596 21:46:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.596 21:46:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.858 [2024-11-27 21:46:37.750937] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:14:14.858 [2024-11-27 21:46:37.751640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81935 ] 00:14:14.858 [2024-11-27 21:46:37.899701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.858 [2024-11-27 21:46:37.928140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.803 xnvme_bdev 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:15.803 
21:46:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 81935 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 81935 ']' 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 81935 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81935 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:15.803 killing process with pid 81935 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81935' 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 81935 00:14:15.803 21:46:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 81935 00:14:16.064 00:14:16.064 real 0m1.402s 00:14:16.064 user 0m1.487s 00:14:16.064 sys 0m0.399s 00:14:16.064 ************************************ 00:14:16.064 END TEST xnvme_rpc 00:14:16.064 ************************************ 00:14:16.064 21:46:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:16.064 21:46:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.064 21:46:39 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:16.064 21:46:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:16.064 21:46:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:16.064 21:46:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:16.064 ************************************ 00:14:16.064 START TEST xnvme_bdevperf 00:14:16.064 ************************************ 00:14:16.064 21:46:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:16.064 21:46:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:16.064 21:46:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:16.064 21:46:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:16.064 21:46:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:16.064 21:46:39 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:16.064 21:46:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:16.064 21:46:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:16.064 { 00:14:16.064 "subsystems": [ 00:14:16.064 { 00:14:16.064 "subsystem": "bdev", 00:14:16.064 "config": [ 00:14:16.064 { 00:14:16.064 "params": { 00:14:16.064 "io_mechanism": "io_uring_cmd", 00:14:16.064 "conserve_cpu": false, 00:14:16.064 "filename": "/dev/ng0n1", 00:14:16.064 "name": "xnvme_bdev" 00:14:16.064 }, 00:14:16.064 "method": "bdev_xnvme_create" 00:14:16.064 }, 00:14:16.064 { 00:14:16.064 "method": "bdev_wait_for_examine" 00:14:16.064 } 00:14:16.064 ] 00:14:16.064 } 00:14:16.064 ] 00:14:16.064 } 00:14:16.323 [2024-11-27 21:46:39.201039] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:14:16.323 [2024-11-27 21:46:39.201610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81991 ] 00:14:16.323 [2024-11-27 21:46:39.349664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.323 [2024-11-27 21:46:39.380350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.584 Running I/O for 5 seconds... 00:14:18.473 32832.00 IOPS, 128.25 MiB/s [2024-11-27T21:46:42.536Z] 32895.50 IOPS, 128.50 MiB/s [2024-11-27T21:46:43.925Z] 33035.67 IOPS, 129.05 MiB/s [2024-11-27T21:46:44.499Z] 33143.25 IOPS, 129.47 MiB/s 00:14:21.378 Latency(us) 00:14:21.378 [2024-11-27T21:46:44.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.378 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:21.378 xnvme_bdev : 5.00 33106.86 129.32 0.00 0.00 1929.27 403.30 8922.98 00:14:21.378 [2024-11-27T21:46:44.499Z] =================================================================================================================== 00:14:21.378 [2024-11-27T21:46:44.499Z] Total : 33106.86 129.32 0.00 0.00 1929.27 403.30 8922.98 00:14:21.639 21:46:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:21.639 21:46:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:21.639 21:46:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:21.639 21:46:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:21.639 21:46:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:21.640 { 00:14:21.640 "subsystems": [ 00:14:21.640 { 00:14:21.640 "subsystem": "bdev", 00:14:21.640 "config": [ 00:14:21.640 { 00:14:21.640 "params": { 00:14:21.640 "io_mechanism": "io_uring_cmd", 00:14:21.640 "conserve_cpu": false, 00:14:21.640 "filename": "/dev/ng0n1", 00:14:21.640 "name": "xnvme_bdev" 00:14:21.640 }, 00:14:21.640 "method": "bdev_xnvme_create" 00:14:21.640 }, 00:14:21.640 { 00:14:21.640 "method": "bdev_wait_for_examine" 00:14:21.640 } 00:14:21.640 ] 00:14:21.640 } 00:14:21.640 ] 00:14:21.640 } 00:14:21.640 [2024-11-27 21:46:44.729163] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:14:21.640 [2024-11-27 21:46:44.729308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82060 ] 00:14:21.901 [2024-11-27 21:46:44.876578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.901 [2024-11-27 21:46:44.905785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.901 Running I/O for 5 seconds... 00:14:24.233 34223.00 IOPS, 133.68 MiB/s [2024-11-27T21:46:48.300Z] 33953.00 IOPS, 132.63 MiB/s [2024-11-27T21:46:49.273Z] 34242.33 IOPS, 133.76 MiB/s [2024-11-27T21:46:50.216Z] 34640.75 IOPS, 135.32 MiB/s 00:14:27.095 Latency(us) 00:14:27.095 [2024-11-27T21:46:50.216Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.095 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:27.095 xnvme_bdev : 5.00 35297.62 137.88 0.00 0.00 1809.21 352.89 9578.34 00:14:27.095 [2024-11-27T21:46:50.216Z] =================================================================================================================== 00:14:27.095 [2024-11-27T21:46:50.216Z] Total : 35297.62 137.88 0.00 0.00 1809.21 352.89 9578.34 00:14:27.095 21:46:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:27.095 21:46:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:27.095 21:46:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:27.095 21:46:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:27.095 21:46:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:27.411 { 00:14:27.411 "subsystems": [ 00:14:27.411 { 00:14:27.411 "subsystem": "bdev", 00:14:27.411 "config": [ 00:14:27.411 { 00:14:27.411 "params": { 00:14:27.411 "io_mechanism": "io_uring_cmd", 00:14:27.411 "conserve_cpu": false, 00:14:27.411 "filename": "/dev/ng0n1", 00:14:27.411 "name": "xnvme_bdev" 00:14:27.411 }, 00:14:27.411 "method": "bdev_xnvme_create" 00:14:27.411 }, 00:14:27.411 { 00:14:27.411 "method": "bdev_wait_for_examine" 00:14:27.411 } 00:14:27.411 ] 00:14:27.411 } 00:14:27.411 ] 00:14:27.411 } 00:14:27.411 [2024-11-27 21:46:50.260041] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:14:27.411 [2024-11-27 21:46:50.260181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82124 ] 00:14:27.411 [2024-11-27 21:46:50.408254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.411 [2024-11-27 21:46:50.437552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.765 Running I/O for 5 seconds... 
00:14:29.649 77312.00 IOPS, 302.00 MiB/s [2024-11-27T21:46:53.711Z] 77792.00 IOPS, 303.88 MiB/s [2024-11-27T21:46:54.651Z] 78272.00 IOPS, 305.75 MiB/s [2024-11-27T21:46:55.591Z] 78112.00 IOPS, 305.12 MiB/s [2024-11-27T21:46:55.591Z] 77580.80 IOPS, 303.05 MiB/s 00:14:32.470 Latency(us) 00:14:32.470 [2024-11-27T21:46:55.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.470 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:32.470 xnvme_bdev : 5.00 77546.00 302.91 0.00 0.00 821.86 507.27 2848.30 00:14:32.470 [2024-11-27T21:46:55.591Z] =================================================================================================================== 00:14:32.470 [2024-11-27T21:46:55.591Z] Total : 77546.00 302.91 0.00 0.00 821.86 507.27 2848.30 00:14:32.730 21:46:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:32.730 21:46:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:32.730 21:46:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:32.730 21:46:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:32.730 21:46:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:32.730 { 00:14:32.730 "subsystems": [ 00:14:32.730 { 00:14:32.730 "subsystem": "bdev", 00:14:32.730 "config": [ 00:14:32.730 { 00:14:32.730 "params": { 00:14:32.730 "io_mechanism": "io_uring_cmd", 00:14:32.730 "conserve_cpu": false, 00:14:32.730 "filename": "/dev/ng0n1", 00:14:32.730 "name": "xnvme_bdev" 00:14:32.730 }, 00:14:32.730 "method": "bdev_xnvme_create" 00:14:32.730 }, 00:14:32.730 { 00:14:32.730 "method": "bdev_wait_for_examine" 00:14:32.730 } 00:14:32.730 ] 00:14:32.730 } 00:14:32.730 ] 00:14:32.730 } 00:14:32.730 [2024-11-27 21:46:55.792495] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:14:32.730 [2024-11-27 21:46:55.792630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82187 ] 00:14:32.991 [2024-11-27 21:46:55.940235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.991 [2024-11-27 21:46:55.969602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.991 Running I/O for 5 seconds... 
00:14:35.312 42522.00 IOPS, 166.10 MiB/s [2024-11-27T21:46:59.375Z] 41094.50 IOPS, 160.53 MiB/s [2024-11-27T21:47:00.320Z] 40198.67 IOPS, 157.03 MiB/s [2024-11-27T21:47:01.260Z] 39724.50 IOPS, 155.17 MiB/s [2024-11-27T21:47:01.260Z] 39253.20 IOPS, 153.33 MiB/s 00:14:38.139 Latency(us) 00:14:38.139 [2024-11-27T21:47:01.260Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.139 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:38.139 xnvme_bdev : 5.00 39230.10 153.24 0.00 0.00 1626.99 176.44 18047.61 00:14:38.139 [2024-11-27T21:47:01.260Z] =================================================================================================================== 00:14:38.139 [2024-11-27T21:47:01.260Z] Total : 39230.10 153.24 0.00 0.00 1626.99 176.44 18047.61 00:14:38.399 ************************************ 00:14:38.399 END TEST xnvme_bdevperf 00:14:38.399 ************************************ 00:14:38.399 00:14:38.399 real 0m22.129s 00:14:38.399 user 0m10.744s 00:14:38.399 sys 0m10.878s 00:14:38.399 21:47:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:38.399 21:47:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:38.399 21:47:01 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:38.399 21:47:01 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:38.399 21:47:01 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:38.399 21:47:01 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:38.399 ************************************ 00:14:38.399 START TEST xnvme_fio_plugin 00:14:38.399 ************************************ 00:14:38.399 21:47:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:38.399 21:47:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:38.399 21:47:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:38.399 21:47:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:38.399 21:47:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:38.399 21:47:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:38.399 21:47:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:38.399 21:47:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:38.399 21:47:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:38.399 21:47:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:38.400 21:47:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:38.400 21:47:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:38.400 21:47:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # 
local asan_lib= 00:14:38.400 21:47:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:38.400 21:47:01 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:38.400 21:47:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:38.400 21:47:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:38.400 21:47:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:38.400 21:47:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:38.400 21:47:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:38.400 21:47:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:38.400 21:47:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:38.400 21:47:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:38.400 21:47:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:38.400 { 00:14:38.400 "subsystems": [ 00:14:38.400 { 00:14:38.400 "subsystem": "bdev", 00:14:38.400 "config": [ 00:14:38.400 { 00:14:38.400 "params": { 00:14:38.400 "io_mechanism": "io_uring_cmd", 00:14:38.400 "conserve_cpu": false, 00:14:38.400 "filename": "/dev/ng0n1", 00:14:38.400 "name": "xnvme_bdev" 00:14:38.400 }, 00:14:38.400 "method": "bdev_xnvme_create" 00:14:38.400 }, 00:14:38.400 { 00:14:38.400 "method": "bdev_wait_for_examine" 00:14:38.400 } 00:14:38.400 ] 00:14:38.400 } 00:14:38.400 ] 00:14:38.400 } 00:14:38.662 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:38.662 fio-3.35 00:14:38.662 Starting 1 thread 00:14:43.954 00:14:43.954 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82289: Wed Nov 27 21:47:06 2024 00:14:43.954 read: IOPS=38.2k, BW=149MiB/s (157MB/s)(747MiB/5001msec) 00:14:43.954 slat (usec): min=2, max=247, avg= 3.41, stdev= 1.74 00:14:43.954 clat (usec): min=809, max=3506, avg=1539.36, stdev=326.12 00:14:43.954 lat (usec): min=812, max=3538, avg=1542.77, stdev=326.44 00:14:43.954 clat percentiles (usec): 00:14:43.954 | 1.00th=[ 979], 5.00th=[ 1074], 10.00th=[ 1139], 20.00th=[ 1237], 00:14:43.954 | 30.00th=[ 1336], 40.00th=[ 1418], 50.00th=[ 1516], 60.00th=[ 1598], 00:14:43.954 | 70.00th=[ 1696], 80.00th=[ 1811], 90.00th=[ 1975], 95.00th=[ 2114], 00:14:43.954 | 99.00th=[ 2409], 99.50th=[ 2573], 99.90th=[ 2999], 99.95th=[ 3130], 00:14:43.954 | 99.99th=[ 3326] 00:14:43.954 bw ( KiB/s): min=135168, max=167936, per=99.30%, avg=151836.44, stdev=13051.26, samples=9 00:14:43.954 iops : min=33792, max=41984, avg=37959.11, stdev=3262.81, samples=9 00:14:43.954 lat (usec) : 1000=1.45% 00:14:43.954 lat (msec) : 2=89.77%, 4=8.77% 00:14:43.954 cpu : usr=38.66%, sys=60.26%, ctx=9, majf=0, minf=1063 00:14:43.954 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:43.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:43.954 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 
00:14:43.954 issued rwts: total=191168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:43.954 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:43.954 00:14:43.954 Run status group 0 (all jobs): 00:14:43.954 READ: bw=149MiB/s (157MB/s), 149MiB/s-149MiB/s (157MB/s-157MB/s), io=747MiB (783MB), run=5001-5001msec 00:14:44.215 ----------------------------------------------------- 00:14:44.215 Suppressions used: 00:14:44.215 count bytes template 00:14:44.215 1 11 /usr/src/fio/parse.c 00:14:44.215 1 8 libtcmalloc_minimal.so 00:14:44.215 1 904 libcrypto.so 00:14:44.215 ----------------------------------------------------- 00:14:44.215 00:14:44.476 21:47:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:44.476 21:47:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:44.476 21:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:44.476 21:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:44.476 21:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:44.476 21:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:44.476 21:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:44.476 21:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:44.476 21:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:44.477 21:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:44.477 21:47:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:44.477 21:47:07 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:44.477 21:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:44.477 21:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:44.477 21:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:44.477 21:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:44.477 21:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:44.477 21:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:44.477 21:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:44.477 21:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:44.477 21:47:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:44.477 { 00:14:44.477 "subsystems": [ 00:14:44.477 { 00:14:44.477 "subsystem": "bdev", 00:14:44.477 "config": [ 00:14:44.477 { 00:14:44.477 "params": { 00:14:44.477 "io_mechanism": "io_uring_cmd", 00:14:44.477 "conserve_cpu": false, 00:14:44.477 "filename": "/dev/ng0n1", 00:14:44.477 "name": "xnvme_bdev" 00:14:44.477 }, 00:14:44.477 "method": "bdev_xnvme_create" 00:14:44.477 }, 00:14:44.477 { 00:14:44.477 "method": "bdev_wait_for_examine" 00:14:44.477 } 00:14:44.477 ] 00:14:44.477 } 00:14:44.477 ] 00:14:44.477 } 00:14:44.477 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:44.477 fio-3.35 00:14:44.477 Starting 1 thread 00:14:51.067 00:14:51.067 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82374: Wed Nov 27 21:47:12 2024 00:14:51.067 write: IOPS=35.9k, BW=140MiB/s (147MB/s)(702MiB/5001msec); 0 zone resets 00:14:51.067 slat (nsec): min=2926, max=69600, avg=4005.76, stdev=2298.39 00:14:51.067 clat (usec): min=159, max=5768, avg=1620.00, stdev=294.68 00:14:51.067 lat (usec): min=163, max=5772, avg=1624.01, stdev=295.08 00:14:51.067 clat percentiles (usec): 00:14:51.067 | 1.00th=[ 1057], 5.00th=[ 1221], 10.00th=[ 1303], 20.00th=[ 1385], 00:14:51.067 | 30.00th=[ 1467], 40.00th=[ 1532], 50.00th=[ 1598], 60.00th=[ 1663], 00:14:51.067 | 70.00th=[ 1729], 80.00th=[ 1827], 90.00th=[ 1975], 95.00th=[ 2114], 00:14:51.067 | 99.00th=[ 2507], 99.50th=[ 2769], 99.90th=[ 3425], 99.95th=[ 3654], 00:14:51.067 | 99.99th=[ 4555] 00:14:51.067 bw ( KiB/s): min=138360, max=150608, per=100.00%, avg=144005.33, stdev=4416.23, samples=9 00:14:51.067 iops : min=34590, max=37652, avg=36001.33, stdev=1104.06, samples=9 00:14:51.067 lat (usec) : 250=0.01%, 500=0.03%, 750=0.20%, 1000=0.39% 00:14:51.067 lat (msec) : 2=90.73%, 4=8.61%, 10=0.02% 00:14:51.067 cpu : usr=37.96%, sys=60.62%, ctx=9, majf=0, minf=1064 00:14:51.067 IO depths : 1=1.5%, 2=3.0%, 4=6.0%, 8=12.1%, 16=24.6%, 32=51.2%, >=64=1.7% 00:14:51.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:51.067 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:51.067 issued rwts: total=0,179678,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:51.067 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:51.067 00:14:51.067 Run status group 0 (all jobs): 00:14:51.067 WRITE: bw=140MiB/s (147MB/s), 140MiB/s-140MiB/s (147MB/s-147MB/s), io=702MiB (736MB), run=5001-5001msec 00:14:51.067 ----------------------------------------------------- 00:14:51.067 Suppressions used: 00:14:51.067 count bytes template 00:14:51.067 1 11 /usr/src/fio/parse.c 00:14:51.067 1 8 libtcmalloc_minimal.so 00:14:51.067 1 904 libcrypto.so 00:14:51.067 ----------------------------------------------------- 00:14:51.067 00:14:51.067 00:14:51.067 real 0m11.992s 00:14:51.067 user 0m4.963s 00:14:51.067 sys 0m6.584s 00:14:51.067 21:47:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:51.067 ************************************ 00:14:51.067 END TEST xnvme_fio_plugin 00:14:51.067 ************************************ 00:14:51.067 21:47:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:51.067 21:47:13 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:51.067 21:47:13 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:51.067 21:47:13 nvme_xnvme -- xnvme/xnvme.sh@84 -- # 
conserve_cpu=true 00:14:51.067 21:47:13 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:51.067 21:47:13 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:51.067 21:47:13 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:51.067 21:47:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:51.067 ************************************ 00:14:51.067 START TEST xnvme_rpc 00:14:51.067 ************************************ 00:14:51.067 21:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:51.067 21:47:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:51.067 21:47:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:51.067 21:47:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:51.067 21:47:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:51.067 21:47:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82454 00:14:51.068 21:47:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82454 00:14:51.068 21:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82454 ']' 00:14:51.068 21:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.068 21:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:51.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.068 21:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.068 21:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:51.068 21:47:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.068 21:47:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:51.068 [2024-11-27 21:47:13.473257] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:14:51.068 [2024-11-27 21:47:13.473430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82454 ] 00:14:51.068 [2024-11-27 21:47:13.614027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.068 [2024-11-27 21:47:13.643193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.329 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:51.329 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:51.329 21:47:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:14:51.329 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.329 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.329 xnvme_bdev 00:14:51.329 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.329 21:47:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:51.329 21:47:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:51.329 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.329 21:47:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:51.329 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.329 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.329 21:47:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:51.329 21:47:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:51.329 21:47:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:51.329 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.329 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.329 21:47:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:51.329 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.329 21:47:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:51.329 21:47:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:51.329 21:47:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:51.329 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.329 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.329 21:47:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:51.329 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.590 21:47:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:51.590 21:47:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:51.590 21:47:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:51.590 
21:47:14 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:51.590 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.590 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.590 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.590 21:47:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:51.590 21:47:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:51.590 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.590 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.590 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.590 21:47:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82454 00:14:51.590 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82454 ']' 00:14:51.590 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82454 00:14:51.590 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:51.590 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:51.590 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82454 00:14:51.590 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:51.590 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:51.590 killing process with pid 82454 00:14:51.590 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82454' 00:14:51.590 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82454 00:14:51.590 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82454 00:14:51.851 00:14:51.851 real 0m1.405s 00:14:51.851 user 0m1.536s 00:14:51.851 sys 0m0.356s 00:14:51.851 ************************************ 00:14:51.851 END TEST xnvme_rpc 00:14:51.851 ************************************ 00:14:51.851 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:51.851 21:47:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.851 21:47:14 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:51.851 21:47:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:51.851 21:47:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:51.851 21:47:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:51.851 ************************************ 00:14:51.851 START TEST xnvme_bdevperf 00:14:51.851 ************************************ 00:14:51.851 21:47:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:51.851 21:47:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:51.851 21:47:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:51.851 21:47:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:51.851 21:47:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:51.851 21:47:14 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:51.851 21:47:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:51.851 21:47:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:51.851 { 00:14:51.851 "subsystems": [ 00:14:51.851 { 00:14:51.851 "subsystem": "bdev", 00:14:51.851 "config": [ 00:14:51.851 { 00:14:51.851 "params": { 00:14:51.851 "io_mechanism": "io_uring_cmd", 00:14:51.851 "conserve_cpu": true, 00:14:51.851 "filename": "/dev/ng0n1", 00:14:51.851 "name": "xnvme_bdev" 00:14:51.851 }, 00:14:51.851 "method": "bdev_xnvme_create" 00:14:51.851 }, 00:14:51.851 { 00:14:51.851 "method": "bdev_wait_for_examine" 00:14:51.851 } 00:14:51.851 ] 00:14:51.851 } 00:14:51.851 ] 00:14:51.851 } 00:14:51.851 [2024-11-27 21:47:14.930562] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:14:51.851 [2024-11-27 21:47:14.930738] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82506 ] 00:14:52.113 [2024-11-27 21:47:15.079959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.113 [2024-11-27 21:47:15.108527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.113 Running I/O for 5 seconds... 00:14:54.442 36480.00 IOPS, 142.50 MiB/s [2024-11-27T21:47:18.507Z] 35744.00 IOPS, 139.62 MiB/s [2024-11-27T21:47:19.449Z] 35944.00 IOPS, 140.41 MiB/s [2024-11-27T21:47:20.391Z] 36106.25 IOPS, 141.04 MiB/s [2024-11-27T21:47:20.392Z] 36309.00 IOPS, 141.83 MiB/s 00:14:57.271 Latency(us) 00:14:57.271 [2024-11-27T21:47:20.392Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.271 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:57.271 xnvme_bdev : 5.01 36293.20 141.77 0.00 0.00 1759.11 702.62 4713.55 00:14:57.271 [2024-11-27T21:47:20.392Z] =================================================================================================================== 00:14:57.271 [2024-11-27T21:47:20.392Z] Total : 36293.20 141.77 0.00 0.00 1759.11 702.62 4713.55 00:14:57.532 21:47:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:57.532 21:47:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:57.532 21:47:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:57.532 21:47:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:57.532 21:47:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:57.532 { 00:14:57.532 "subsystems": [ 00:14:57.532 { 00:14:57.532 "subsystem": "bdev", 00:14:57.532 "config": [ 00:14:57.532 { 00:14:57.532 "params": { 00:14:57.532 "io_mechanism": "io_uring_cmd", 00:14:57.532 "conserve_cpu": true, 00:14:57.532 "filename": "/dev/ng0n1", 00:14:57.532 "name": "xnvme_bdev" 00:14:57.532 }, 00:14:57.532 "method": "bdev_xnvme_create" 00:14:57.532 }, 00:14:57.532 { 00:14:57.532 "method": "bdev_wait_for_examine" 00:14:57.532 } 00:14:57.532 ] 00:14:57.532 } 00:14:57.532 ] 00:14:57.532 } 00:14:57.532 [2024-11-27 21:47:20.480533] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:14:57.532 [2024-11-27 21:47:20.480716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82570 ] 00:14:57.533 [2024-11-27 21:47:20.628089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.794 [2024-11-27 21:47:20.657116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.794 Running I/O for 5 seconds... 00:14:59.681 37680.00 IOPS, 147.19 MiB/s [2024-11-27T21:47:24.187Z] 37815.50 IOPS, 147.72 MiB/s [2024-11-27T21:47:25.129Z] 37530.33 IOPS, 146.60 MiB/s [2024-11-27T21:47:26.073Z] 37422.00 IOPS, 146.18 MiB/s 00:15:02.952 Latency(us) 00:15:02.952 [2024-11-27T21:47:26.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.952 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:02.952 xnvme_bdev : 5.00 37118.89 145.00 0.00 0.00 1719.39 677.42 8368.44 00:15:02.952 [2024-11-27T21:47:26.073Z] =================================================================================================================== 00:15:02.952 [2024-11-27T21:47:26.073Z] Total : 37118.89 145.00 0.00 0.00 1719.39 677.42 8368.44 00:15:02.952 21:47:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:02.952 21:47:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:02.952 21:47:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:02.952 21:47:25 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:02.952 21:47:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:02.952 { 00:15:02.952 "subsystems": [ 00:15:02.952 { 00:15:02.952 "subsystem": "bdev", 00:15:02.952 "config": [ 00:15:02.952 { 00:15:02.952 "params": { 00:15:02.952 "io_mechanism": "io_uring_cmd", 00:15:02.952 "conserve_cpu": true, 00:15:02.952 "filename": "/dev/ng0n1", 00:15:02.952 "name": "xnvme_bdev" 00:15:02.952 }, 00:15:02.952 "method": "bdev_xnvme_create" 00:15:02.952 }, 00:15:02.952 { 00:15:02.952 "method": "bdev_wait_for_examine" 00:15:02.952 } 00:15:02.952 ] 00:15:02.952 } 00:15:02.952 ] 00:15:02.952 } 00:15:02.952 [2024-11-27 21:47:26.017607] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:15:02.952 [2024-11-27 21:47:26.017741] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82638 ] 00:15:03.213 [2024-11-27 21:47:26.164483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.213 [2024-11-27 21:47:26.192634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.213 Running I/O for 5 seconds... 
00:15:05.544 78016.00 IOPS, 304.75 MiB/s [2024-11-27T21:47:29.610Z] 78240.00 IOPS, 305.62 MiB/s [2024-11-27T21:47:30.550Z] 78442.67 IOPS, 306.42 MiB/s [2024-11-27T21:47:31.484Z] 78736.00 IOPS, 307.56 MiB/s [2024-11-27T21:47:31.484Z] 81676.80 IOPS, 319.05 MiB/s 00:15:08.363 Latency(us) 00:15:08.363 [2024-11-27T21:47:31.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.363 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:08.363 xnvme_bdev : 5.00 81637.12 318.90 0.00 0.00 780.47 401.72 2785.28 00:15:08.363 [2024-11-27T21:47:31.484Z] =================================================================================================================== 00:15:08.363 [2024-11-27T21:47:31.484Z] Total : 81637.12 318.90 0.00 0.00 780.47 401.72 2785.28 00:15:08.363 21:47:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:08.363 21:47:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:08.363 21:47:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:08.363 21:47:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:08.363 21:47:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:08.363 { 00:15:08.363 "subsystems": [ 00:15:08.363 { 00:15:08.363 "subsystem": "bdev", 00:15:08.363 "config": [ 00:15:08.363 { 00:15:08.363 "params": { 00:15:08.363 "io_mechanism": "io_uring_cmd", 00:15:08.363 "conserve_cpu": true, 00:15:08.363 "filename": "/dev/ng0n1", 00:15:08.363 "name": "xnvme_bdev" 00:15:08.363 }, 00:15:08.363 "method": "bdev_xnvme_create" 00:15:08.363 }, 00:15:08.363 { 00:15:08.363 "method": "bdev_wait_for_examine" 00:15:08.363 } 00:15:08.363 ] 00:15:08.363 } 00:15:08.363 ] 00:15:08.363 } 00:15:08.363 [2024-11-27 21:47:31.468783] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:15:08.363 [2024-11-27 21:47:31.468879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82701 ] 00:15:08.621 [2024-11-27 21:47:31.601863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.621 [2024-11-27 21:47:31.619723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.621 Running I/O for 5 seconds... 
00:15:10.934 56785.00 IOPS, 221.82 MiB/s [2024-11-27T21:47:34.993Z] 53705.50 IOPS, 209.79 MiB/s [2024-11-27T21:47:35.928Z] 50688.67 IOPS, 198.00 MiB/s [2024-11-27T21:47:36.869Z] 50325.00 IOPS, 196.58 MiB/s [2024-11-27T21:47:36.869Z] 47605.40 IOPS, 185.96 MiB/s 00:15:13.748 Latency(us) 00:15:13.748 [2024-11-27T21:47:36.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.748 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:13.748 xnvme_bdev : 5.01 47543.79 185.72 0.00 0.00 1340.32 52.38 20971.52 00:15:13.748 [2024-11-27T21:47:36.869Z] =================================================================================================================== 00:15:13.748 [2024-11-27T21:47:36.869Z] Total : 47543.79 185.72 0.00 0.00 1340.32 52.38 20971.52 00:15:13.748 00:15:13.748 real 0m22.005s 00:15:13.748 user 0m12.561s 00:15:13.748 sys 0m7.333s 00:15:14.009 ************************************ 00:15:14.009 END TEST xnvme_bdevperf 00:15:14.009 ************************************ 00:15:14.009 21:47:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:14.009 21:47:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:14.009 21:47:36 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:14.009 21:47:36 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:14.009 21:47:36 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:14.009 21:47:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:14.009 ************************************ 00:15:14.009 START TEST xnvme_fio_plugin 00:15:14.009 ************************************ 00:15:14.009 21:47:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:14.009 21:47:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:14.009 21:47:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:14.009 21:47:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:14.009 21:47:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:14.009 21:47:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:14.009 21:47:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:14.009 21:47:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:14.009 21:47:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:14.009 21:47:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:14.009 21:47:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:14.009 21:47:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:14.009 21:47:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # 
local asan_lib= 00:15:14.009 21:47:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:14.009 21:47:36 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:14.009 21:47:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:14.009 21:47:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:14.009 21:47:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:14.009 21:47:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:14.009 21:47:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:14.009 21:47:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:14.009 21:47:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:14.009 21:47:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:14.010 21:47:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:14.010 { 00:15:14.010 "subsystems": [ 00:15:14.010 { 00:15:14.010 "subsystem": "bdev", 00:15:14.010 "config": [ 00:15:14.010 { 00:15:14.010 "params": { 00:15:14.010 "io_mechanism": "io_uring_cmd", 00:15:14.010 "conserve_cpu": true, 00:15:14.010 "filename": "/dev/ng0n1", 00:15:14.010 "name": "xnvme_bdev" 00:15:14.010 }, 00:15:14.010 "method": "bdev_xnvme_create" 00:15:14.010 }, 00:15:14.010 { 00:15:14.010 "method": "bdev_wait_for_examine" 00:15:14.010 } 00:15:14.010 ] 00:15:14.010 } 00:15:14.010 ] 00:15:14.010 } 00:15:14.271 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:14.271 fio-3.35 00:15:14.271 Starting 1 thread 00:15:19.565 00:15:19.565 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82808: Wed Nov 27 21:47:42 2024 00:15:19.565 read: IOPS=41.9k, BW=163MiB/s (171MB/s)(818MiB/5001msec) 00:15:19.565 slat (nsec): min=2900, max=86421, avg=3354.60, stdev=1474.73 00:15:19.565 clat (usec): min=684, max=3065, avg=1395.88, stdev=255.36 00:15:19.565 lat (usec): min=687, max=3098, avg=1399.23, stdev=255.66 00:15:19.565 clat percentiles (usec): 00:15:19.565 | 1.00th=[ 1012], 5.00th=[ 1074], 10.00th=[ 1123], 20.00th=[ 1188], 00:15:19.565 | 30.00th=[ 1237], 40.00th=[ 1287], 50.00th=[ 1336], 60.00th=[ 1418], 00:15:19.565 | 70.00th=[ 1483], 80.00th=[ 1598], 90.00th=[ 1745], 95.00th=[ 1876], 00:15:19.565 | 99.00th=[ 2180], 99.50th=[ 2278], 99.90th=[ 2540], 99.95th=[ 2704], 00:15:19.565 | 99.99th=[ 2933] 00:15:19.565 bw ( KiB/s): min=148480, max=180736, per=98.68%, avg=165205.33, stdev=12720.39, samples=9 00:15:19.565 iops : min=37120, max=45184, avg=41301.33, stdev=3180.10, samples=9 00:15:19.565 lat (usec) : 750=0.01%, 1000=0.80% 00:15:19.565 lat (msec) : 2=96.54%, 4=2.65% 00:15:19.565 cpu : usr=73.80%, sys=23.36%, ctx=32, majf=0, minf=1063 00:15:19.565 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:19.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:19.565 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=1.5%, >=64=0.0% 00:15:19.565 issued rwts: total=209321,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:19.565 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:19.565 00:15:19.565 Run status group 0 (all jobs): 00:15:19.565 READ: bw=163MiB/s (171MB/s), 163MiB/s-163MiB/s (171MB/s-171MB/s), io=818MiB (857MB), run=5001-5001msec 00:15:19.827 ----------------------------------------------------- 00:15:19.827 Suppressions used: 00:15:19.827 count bytes template 00:15:19.827 1 11 /usr/src/fio/parse.c 00:15:19.827 1 8 libtcmalloc_minimal.so 00:15:19.827 1 904 libcrypto.so 00:15:19.827 ----------------------------------------------------- 00:15:19.827 00:15:20.088 21:47:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:20.088 21:47:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:20.088 21:47:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:20.088 21:47:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:20.088 21:47:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:20.088 21:47:42 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:20.088 21:47:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:20.088 21:47:42 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:20.088 21:47:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:20.088 21:47:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:20.088 21:47:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:20.088 21:47:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:20.088 21:47:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:20.088 21:47:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:20.088 21:47:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:20.088 21:47:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:20.088 21:47:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:20.088 21:47:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:20.088 21:47:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:20.088 21:47:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:20.089 21:47:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:20.089 { 00:15:20.089 "subsystems": [ 00:15:20.089 { 00:15:20.089 "subsystem": "bdev", 00:15:20.089 "config": [ 00:15:20.089 { 00:15:20.089 "params": { 00:15:20.089 "io_mechanism": "io_uring_cmd", 00:15:20.089 "conserve_cpu": true, 00:15:20.089 "filename": "/dev/ng0n1", 00:15:20.089 "name": "xnvme_bdev" 00:15:20.089 }, 00:15:20.089 "method": "bdev_xnvme_create" 00:15:20.089 }, 00:15:20.089 { 00:15:20.089 "method": "bdev_wait_for_examine" 00:15:20.089 } 00:15:20.089 ] 00:15:20.089 } 00:15:20.089 ] 00:15:20.089 } 00:15:20.089 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:20.089 fio-3.35 00:15:20.089 Starting 1 thread 00:15:25.508 00:15:25.508 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82888: Wed Nov 27 21:47:48 2024 00:15:25.508 write: IOPS=40.4k, BW=158MiB/s (165MB/s)(790MiB/5005msec); 0 zone resets 00:15:25.508 slat (usec): min=2, max=147, avg= 4.03, stdev= 2.30 00:15:25.508 clat (usec): min=202, max=6955, avg=1425.57, stdev=287.59 00:15:25.508 lat (usec): min=205, max=6959, avg=1429.60, stdev=288.18 00:15:25.508 clat percentiles (usec): 00:15:25.508 | 1.00th=[ 1012], 5.00th=[ 1074], 10.00th=[ 1123], 20.00th=[ 1188], 00:15:25.508 | 30.00th=[ 1254], 40.00th=[ 1303], 50.00th=[ 1385], 60.00th=[ 1450], 00:15:25.508 | 70.00th=[ 1532], 80.00th=[ 1631], 90.00th=[ 1778], 95.00th=[ 1909], 00:15:25.508 | 99.00th=[ 2212], 99.50th=[ 2376], 99.90th=[ 3326], 99.95th=[ 4113], 00:15:25.508 | 99.99th=[ 6783] 00:15:25.508 bw ( KiB/s): min=149368, max=180872, per=99.66%, avg=160988.44, stdev=11339.92, samples=9 00:15:25.508 iops : min=37342, max=45218, avg=40247.11, stdev=2834.98, samples=9 00:15:25.508 lat (usec) : 250=0.01%, 500=0.02%, 750=0.03%, 1000=0.75% 00:15:25.508 lat (msec) : 2=96.23%, 4=2.92%, 10=0.06% 00:15:25.508 cpu : usr=62.01%, sys=32.61%, ctx=14, majf=0, minf=1064 00:15:25.508 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.4%, 16=25.0%, 32=50.3%, >=64=1.6% 00:15:25.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.508 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:25.508 issued rwts: total=0,202119,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:25.508 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:25.508 00:15:25.508 Run status group 0 (all jobs): 00:15:25.508 WRITE: bw=158MiB/s (165MB/s), 158MiB/s-158MiB/s (165MB/s-165MB/s), io=790MiB (828MB), run=5005-5005msec 00:15:26.080 ----------------------------------------------------- 00:15:26.080 Suppressions used: 00:15:26.080 count bytes template 00:15:26.080 1 11 /usr/src/fio/parse.c 00:15:26.080 1 8 libtcmalloc_minimal.so 00:15:26.080 1 904 libcrypto.so 00:15:26.080 ----------------------------------------------------- 00:15:26.080 00:15:26.080 ************************************ 00:15:26.080 END TEST xnvme_fio_plugin 00:15:26.080 ************************************ 00:15:26.080 00:15:26.080 real 0m12.020s 00:15:26.080 user 0m7.917s 00:15:26.080 sys 0m3.377s 00:15:26.080 21:47:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:26.080 21:47:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:26.080 Process with pid 82454 is not found 00:15:26.080 21:47:49 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 82454 00:15:26.080 21:47:49 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 82454 ']' 00:15:26.080 21:47:49 nvme_xnvme -- 
common/autotest_common.sh@958 -- # kill -0 82454 00:15:26.080 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (82454) - No such process 00:15:26.080 21:47:49 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 82454 is not found' 00:15:26.080 00:15:26.080 real 2m57.195s 00:15:26.080 user 1m26.447s 00:15:26.080 sys 1m16.327s 00:15:26.080 ************************************ 00:15:26.080 END TEST nvme_xnvme 00:15:26.080 ************************************ 00:15:26.080 21:47:49 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:26.080 21:47:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:26.080 21:47:49 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:26.080 21:47:49 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:26.080 21:47:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:26.080 21:47:49 -- common/autotest_common.sh@10 -- # set +x 00:15:26.080 ************************************ 00:15:26.080 START TEST blockdev_xnvme 00:15:26.080 ************************************ 00:15:26.080 21:47:49 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:26.080 * Looking for test storage... 00:15:26.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:26.080 21:47:49 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:26.080 21:47:49 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:15:26.080 21:47:49 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:26.341 21:47:49 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:26.341 21:47:49 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:26.341 21:47:49 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:26.341 21:47:49 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:26.341 21:47:49 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:26.341 21:47:49 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:26.341 21:47:49 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:26.341 21:47:49 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:26.341 21:47:49 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:26.341 21:47:49 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:26.341 21:47:49 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:26.341 21:47:49 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:26.341 21:47:49 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:26.341 21:47:49 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:26.341 21:47:49 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:26.341 21:47:49 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:26.341 21:47:49 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:26.341 21:47:49 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:26.341 21:47:49 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:26.341 21:47:49 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:26.341 21:47:49 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:26.341 21:47:49 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:26.341 21:47:49 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:26.341 21:47:49 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:26.341 21:47:49 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:26.341 21:47:49 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:26.341 21:47:49 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:26.341 21:47:49 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:26.341 21:47:49 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:26.341 21:47:49 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:26.341 21:47:49 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:26.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.341 --rc genhtml_branch_coverage=1 00:15:26.341 --rc genhtml_function_coverage=1 00:15:26.341 --rc genhtml_legend=1 00:15:26.341 --rc geninfo_all_blocks=1 00:15:26.341 --rc geninfo_unexecuted_blocks=1 00:15:26.341 00:15:26.341 ' 00:15:26.341 21:47:49 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:26.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.341 --rc genhtml_branch_coverage=1 00:15:26.341 --rc genhtml_function_coverage=1 00:15:26.341 --rc genhtml_legend=1 00:15:26.341 --rc geninfo_all_blocks=1 00:15:26.342 --rc geninfo_unexecuted_blocks=1 00:15:26.342 00:15:26.342 ' 00:15:26.342 21:47:49 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:26.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.342 --rc genhtml_branch_coverage=1 00:15:26.342 --rc genhtml_function_coverage=1 00:15:26.342 --rc genhtml_legend=1 00:15:26.342 --rc geninfo_all_blocks=1 00:15:26.342 --rc geninfo_unexecuted_blocks=1 00:15:26.342 00:15:26.342 ' 00:15:26.342 21:47:49 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:26.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.342 --rc genhtml_branch_coverage=1 00:15:26.342 --rc genhtml_function_coverage=1 00:15:26.342 --rc genhtml_legend=1 00:15:26.342 --rc geninfo_all_blocks=1 00:15:26.342 --rc geninfo_unexecuted_blocks=1 00:15:26.342 00:15:26.342 ' 00:15:26.342 21:47:49 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:26.342 21:47:49 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:26.342 21:47:49 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:26.342 21:47:49 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:26.342 21:47:49 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:26.342 21:47:49 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:26.342 21:47:49 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:26.342 21:47:49 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:26.342 21:47:49 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:26.342 21:47:49 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:26.342 21:47:49 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:26.342 21:47:49 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:26.342 21:47:49 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:26.342 21:47:49 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:26.342 21:47:49 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:26.342 21:47:49 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:26.342 21:47:49 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:26.342 21:47:49 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:26.342 21:47:49 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:26.342 21:47:49 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:26.342 21:47:49 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:26.342 21:47:49 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:26.342 21:47:49 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:26.342 21:47:49 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:26.342 21:47:49 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=83018 00:15:26.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.342 21:47:49 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:26.342 21:47:49 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 83018 00:15:26.342 21:47:49 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 83018 ']' 00:15:26.342 21:47:49 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.342 21:47:49 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:26.342 21:47:49 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.342 21:47:49 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:26.342 21:47:49 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:26.342 21:47:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:26.342 [2024-11-27 21:47:49.331393] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:15:26.342 [2024-11-27 21:47:49.331541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83018 ] 00:15:26.603 [2024-11-27 21:47:49.471802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.603 [2024-11-27 21:47:49.501902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.175 21:47:50 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:27.175 21:47:50 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:27.175 21:47:50 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:27.175 21:47:50 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:27.175 21:47:50 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:27.175 21:47:50 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:27.175 21:47:50 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:27.745 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:28.317 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:28.317 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:28.317 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:28.317 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:28.317 nvme0n1 00:15:28.317 nvme0n2 00:15:28.317 nvme0n3 00:15:28.317 nvme1n1 00:15:28.317 nvme2n1 00:15:28.317 nvme3n1 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:28.317 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:28.317 21:47:51 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.317 21:47:51 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:15:28.579 21:47:51 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.579 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:28.579 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:28.579 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "fe9c4c04-b18a-458b-85a3-493439df11c8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fe9c4c04-b18a-458b-85a3-493439df11c8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "2c02be42-4827-4612-9540-4ac5c340fe04"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2c02be42-4827-4612-9540-4ac5c340fe04",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "b403313c-b6ea-426d-8ae1-d48f3d17acdb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b403313c-b6ea-426d-8ae1-d48f3d17acdb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "1d6a35ff-650b-44c5-a9ae-a101bd9c0fa2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1d6a35ff-650b-44c5-a9ae-a101bd9c0fa2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "5a7156de-7996-4d7f-a854-c4d66b075a46"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "5a7156de-7996-4d7f-a854-c4d66b075a46",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "20c2ba73-cc6f-4bcb-94c5-a48002f90350"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "20c2ba73-cc6f-4bcb-94c5-a48002f90350",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:28.579 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:28.579 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:15:28.579 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:28.579 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 83018 00:15:28.579 21:47:51 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 83018 ']' 00:15:28.579 21:47:51 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 83018 00:15:28.579 21:47:51 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:28.579 21:47:51 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:28.579 21:47:51 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83018 00:15:28.579 killing process with pid 83018 00:15:28.579 21:47:51 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:28.579 21:47:51 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:28.579 21:47:51 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83018' 00:15:28.579 21:47:51 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 83018 00:15:28.579 
21:47:51 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 83018 00:15:28.840 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:28.840 21:47:51 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:28.840 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:28.840 21:47:51 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.840 21:47:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:28.840 ************************************ 00:15:28.840 START TEST bdev_hello_world 00:15:28.840 ************************************ 00:15:28.840 21:47:51 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:28.840 [2024-11-27 21:47:51.896805] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:15:28.840 [2024-11-27 21:47:51.896941] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83285 ] 00:15:29.101 [2024-11-27 21:47:52.045145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.101 [2024-11-27 21:47:52.074203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.363 [2024-11-27 21:47:52.298888] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:29.363 [2024-11-27 21:47:52.298956] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:29.363 [2024-11-27 21:47:52.298987] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:29.363 [2024-11-27 21:47:52.301265] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:29.363 [2024-11-27 21:47:52.301828] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:29.363 [2024-11-27 21:47:52.301860] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:29.363 [2024-11-27 21:47:52.302668] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
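For reference: the trace above registers each /dev/nvme*n* namespace as an xNVMe bdev over JSON-RPC (bdev_xnvme_create <device> <name> io_uring -c) and then runs the hello_bdev example against nvme0n1. A minimal hand-run sketch of the same two steps follows, assuming the repository layout from this run; note that hello_bdev builds the bdevs itself from the saved bdev.json config rather than talking to an already-running target.

# Register a namespace as an xNVMe bdev on a running SPDK target
# (same arguments the harness assembles per device above).
./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c
./scripts/rpc.py bdev_wait_for_examine

# Or let the example app load the bdev definitions from the JSON config and
# do a write/read round trip against one of them, exactly as in the trace.
./build/examples/hello_bdev --json ./test/bdev/bdev.json -b nvme0n1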
00:15:29.363 00:15:29.363 [2024-11-27 21:47:52.302748] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:29.625 ************************************ 00:15:29.625 END TEST bdev_hello_world 00:15:29.625 ************************************ 00:15:29.625 00:15:29.625 real 0m0.653s 00:15:29.625 user 0m0.335s 00:15:29.625 sys 0m0.173s 00:15:29.625 21:47:52 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:29.625 21:47:52 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:29.625 21:47:52 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:29.625 21:47:52 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:29.625 21:47:52 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:29.625 21:47:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:29.625 ************************************ 00:15:29.625 START TEST bdev_bounds 00:15:29.625 ************************************ 00:15:29.625 21:47:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:29.625 21:47:52 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=83311 00:15:29.625 Process bdevio pid: 83311 00:15:29.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.625 21:47:52 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:29.625 21:47:52 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 83311' 00:15:29.625 21:47:52 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 83311 00:15:29.625 21:47:52 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:29.625 21:47:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 83311 ']' 00:15:29.625 21:47:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.625 21:47:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:29.625 21:47:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.625 21:47:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:29.625 21:47:52 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:29.625 [2024-11-27 21:47:52.635943] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:15:29.625 [2024-11-27 21:47:52.636087] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83311 ] 00:15:29.887 [2024-11-27 21:47:52.782054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:29.887 [2024-11-27 21:47:52.814506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.887 [2024-11-27 21:47:52.814862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.887 [2024-11-27 21:47:52.814935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.457 21:47:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:30.457 21:47:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:30.457 21:47:53 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:30.718 I/O targets: 00:15:30.718 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:30.718 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:30.718 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:30.718 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:30.718 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:30.718 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:30.718 00:15:30.718 00:15:30.718 CUnit - A unit testing framework for C - Version 2.1-3 00:15:30.718 http://cunit.sourceforge.net/ 00:15:30.718 00:15:30.718 00:15:30.718 Suite: bdevio tests on: nvme3n1 00:15:30.718 Test: blockdev write read block ...passed 00:15:30.718 Test: blockdev write zeroes read block ...passed 00:15:30.718 Test: blockdev write zeroes read no split ...passed 00:15:30.718 Test: blockdev write zeroes read split ...passed 00:15:30.718 Test: blockdev write zeroes read split partial ...passed 00:15:30.718 Test: blockdev reset ...passed 00:15:30.718 Test: blockdev write read 8 blocks ...passed 00:15:30.718 Test: blockdev write read size > 128k ...passed 00:15:30.718 Test: blockdev write read invalid size ...passed 00:15:30.718 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:30.718 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:30.718 Test: blockdev write read max offset ...passed 00:15:30.718 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:30.718 Test: blockdev writev readv 8 blocks ...passed 00:15:30.718 Test: blockdev writev readv 30 x 1block ...passed 00:15:30.718 Test: blockdev writev readv block ...passed 00:15:30.718 Test: blockdev writev readv size > 128k ...passed 00:15:30.718 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:30.718 Test: blockdev comparev and writev ...passed 00:15:30.718 Test: blockdev nvme passthru rw ...passed 00:15:30.718 Test: blockdev nvme passthru vendor specific ...passed 00:15:30.718 Test: blockdev nvme admin passthru ...passed 00:15:30.718 Test: blockdev copy ...passed 00:15:30.718 Suite: bdevio tests on: nvme2n1 00:15:30.718 Test: blockdev write read block ...passed 00:15:30.718 Test: blockdev write zeroes read block ...passed 00:15:30.718 Test: blockdev write zeroes read no split ...passed 00:15:30.718 Test: blockdev write zeroes read split ...passed 00:15:30.719 Test: blockdev write zeroes read split partial ...passed 00:15:30.719 Test: blockdev reset ...passed 
00:15:30.719 Test: blockdev write read 8 blocks ...passed 00:15:30.719 Test: blockdev write read size > 128k ...passed 00:15:30.719 Test: blockdev write read invalid size ...passed 00:15:30.719 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:30.719 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:30.719 Test: blockdev write read max offset ...passed 00:15:30.719 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:30.719 Test: blockdev writev readv 8 blocks ...passed 00:15:30.719 Test: blockdev writev readv 30 x 1block ...passed 00:15:30.719 Test: blockdev writev readv block ...passed 00:15:30.719 Test: blockdev writev readv size > 128k ...passed 00:15:30.719 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:30.719 Test: blockdev comparev and writev ...passed 00:15:30.719 Test: blockdev nvme passthru rw ...passed 00:15:30.719 Test: blockdev nvme passthru vendor specific ...passed 00:15:30.719 Test: blockdev nvme admin passthru ...passed 00:15:30.719 Test: blockdev copy ...passed 00:15:30.719 Suite: bdevio tests on: nvme1n1 00:15:30.719 Test: blockdev write read block ...passed 00:15:30.719 Test: blockdev write zeroes read block ...passed 00:15:30.719 Test: blockdev write zeroes read no split ...passed 00:15:30.719 Test: blockdev write zeroes read split ...passed 00:15:30.719 Test: blockdev write zeroes read split partial ...passed 00:15:30.719 Test: blockdev reset ...passed 00:15:30.719 Test: blockdev write read 8 blocks ...passed 00:15:30.719 Test: blockdev write read size > 128k ...passed 00:15:30.719 Test: blockdev write read invalid size ...passed 00:15:30.719 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:30.719 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:30.719 Test: blockdev write read max offset ...passed 00:15:30.719 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:30.719 Test: blockdev writev readv 8 blocks ...passed 00:15:30.719 Test: blockdev writev readv 30 x 1block ...passed 00:15:30.719 Test: blockdev writev readv block ...passed 00:15:30.719 Test: blockdev writev readv size > 128k ...passed 00:15:30.719 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:30.719 Test: blockdev comparev and writev ...passed 00:15:30.719 Test: blockdev nvme passthru rw ...passed 00:15:30.719 Test: blockdev nvme passthru vendor specific ...passed 00:15:30.719 Test: blockdev nvme admin passthru ...passed 00:15:30.719 Test: blockdev copy ...passed 00:15:30.719 Suite: bdevio tests on: nvme0n3 00:15:30.719 Test: blockdev write read block ...passed 00:15:30.719 Test: blockdev write zeroes read block ...passed 00:15:30.719 Test: blockdev write zeroes read no split ...passed 00:15:30.719 Test: blockdev write zeroes read split ...passed 00:15:30.719 Test: blockdev write zeroes read split partial ...passed 00:15:30.719 Test: blockdev reset ...passed 00:15:30.719 Test: blockdev write read 8 blocks ...passed 00:15:30.719 Test: blockdev write read size > 128k ...passed 00:15:30.719 Test: blockdev write read invalid size ...passed 00:15:30.719 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:30.719 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:30.719 Test: blockdev write read max offset ...passed 00:15:30.719 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:30.719 Test: blockdev writev readv 8 blocks 
...passed 00:15:30.719 Test: blockdev writev readv 30 x 1block ...passed 00:15:30.719 Test: blockdev writev readv block ...passed 00:15:30.719 Test: blockdev writev readv size > 128k ...passed 00:15:30.719 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:30.719 Test: blockdev comparev and writev ...passed 00:15:30.719 Test: blockdev nvme passthru rw ...passed 00:15:30.719 Test: blockdev nvme passthru vendor specific ...passed 00:15:30.719 Test: blockdev nvme admin passthru ...passed 00:15:30.719 Test: blockdev copy ...passed 00:15:30.719 Suite: bdevio tests on: nvme0n2 00:15:30.719 Test: blockdev write read block ...passed 00:15:30.719 Test: blockdev write zeroes read block ...passed 00:15:30.719 Test: blockdev write zeroes read no split ...passed 00:15:30.719 Test: blockdev write zeroes read split ...passed 00:15:30.719 Test: blockdev write zeroes read split partial ...passed 00:15:30.719 Test: blockdev reset ...passed 00:15:30.719 Test: blockdev write read 8 blocks ...passed 00:15:30.719 Test: blockdev write read size > 128k ...passed 00:15:30.719 Test: blockdev write read invalid size ...passed 00:15:30.980 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:30.980 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:30.980 Test: blockdev write read max offset ...passed 00:15:30.980 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:30.980 Test: blockdev writev readv 8 blocks ...passed 00:15:30.980 Test: blockdev writev readv 30 x 1block ...passed 00:15:30.980 Test: blockdev writev readv block ...passed 00:15:30.980 Test: blockdev writev readv size > 128k ...passed 00:15:30.980 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:30.980 Test: blockdev comparev and writev ...passed 00:15:30.980 Test: blockdev nvme passthru rw ...passed 00:15:30.980 Test: blockdev nvme passthru vendor specific ...passed 00:15:30.980 Test: blockdev nvme admin passthru ...passed 00:15:30.980 Test: blockdev copy ...passed 00:15:30.980 Suite: bdevio tests on: nvme0n1 00:15:30.980 Test: blockdev write read block ...passed 00:15:30.980 Test: blockdev write zeroes read block ...passed 00:15:30.980 Test: blockdev write zeroes read no split ...passed 00:15:30.980 Test: blockdev write zeroes read split ...passed 00:15:30.980 Test: blockdev write zeroes read split partial ...passed 00:15:30.980 Test: blockdev reset ...passed 00:15:30.980 Test: blockdev write read 8 blocks ...passed 00:15:30.980 Test: blockdev write read size > 128k ...passed 00:15:30.980 Test: blockdev write read invalid size ...passed 00:15:30.980 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:30.980 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:30.980 Test: blockdev write read max offset ...passed 00:15:30.980 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:30.981 Test: blockdev writev readv 8 blocks ...passed 00:15:30.981 Test: blockdev writev readv 30 x 1block ...passed 00:15:30.981 Test: blockdev writev readv block ...passed 00:15:30.981 Test: blockdev writev readv size > 128k ...passed 00:15:30.981 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:30.981 Test: blockdev comparev and writev ...passed 00:15:30.981 Test: blockdev nvme passthru rw ...passed 00:15:30.981 Test: blockdev nvme passthru vendor specific ...passed 00:15:30.981 Test: blockdev nvme admin passthru ...passed 00:15:30.981 Test: blockdev copy ...passed 
00:15:30.981 00:15:30.981 Run Summary: Type Total Ran Passed Failed Inactive 00:15:30.981 suites 6 6 n/a 0 0 00:15:30.981 tests 138 138 138 0 0 00:15:30.981 asserts 780 780 780 0 n/a 00:15:30.981 00:15:30.981 Elapsed time = 0.634 seconds 00:15:30.981 0 00:15:30.981 21:47:53 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 83311 00:15:30.981 21:47:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 83311 ']' 00:15:30.981 21:47:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 83311 00:15:30.981 21:47:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:30.981 21:47:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:30.981 21:47:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83311 00:15:30.981 21:47:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:30.981 21:47:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:30.981 21:47:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83311' 00:15:30.981 killing process with pid 83311 00:15:30.981 21:47:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 83311 00:15:30.981 21:47:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 83311 00:15:31.242 21:47:54 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:31.242 00:15:31.242 real 0m1.599s 00:15:31.242 user 0m3.915s 00:15:31.242 sys 0m0.364s 00:15:31.242 21:47:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:31.242 21:47:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:31.242 ************************************ 00:15:31.242 END TEST bdev_bounds 00:15:31.242 ************************************ 00:15:31.242 21:47:54 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:31.242 21:47:54 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:31.242 21:47:54 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:31.242 21:47:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:31.242 ************************************ 00:15:31.242 START TEST bdev_nbd 00:15:31.242 ************************************ 00:15:31.242 21:47:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:31.242 21:47:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:31.242 21:47:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:31.242 21:47:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:31.242 21:47:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:31.242 21:47:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:31.242 21:47:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:31.242 21:47:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
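The bdev_bounds run that just finished (138 tests and 780 asserts across the six bdevs) is driven by two commands that both appear in the trace: bdevio is started with the same JSON bdev config and left waiting, and its companion script then triggers the per-bdev suites. A condensed sketch, with flags copied from the trace and paths shortened to the repository root:

# Start bdevio against the saved bdev config; it stays up until told to run.
./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json '' &

# Once it is listening on /var/tmp/spdk.sock, kick off the test suites.
./test/bdev/bdevio/tests.py perform_tests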
00:15:31.242 21:47:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:31.242 21:47:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:31.242 21:47:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:31.242 21:47:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:31.242 21:47:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:31.242 21:47:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:31.242 21:47:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:31.242 21:47:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:31.242 21:47:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=83365 00:15:31.242 21:47:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:31.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:31.242 21:47:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 83365 /var/tmp/spdk-nbd.sock 00:15:31.242 21:47:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 83365 ']' 00:15:31.242 21:47:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:31.242 21:47:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:31.242 21:47:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:31.242 21:47:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:31.242 21:47:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:31.242 21:47:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:31.242 [2024-11-27 21:47:54.314380] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
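The remainder of the trace exercises the same six bdevs through the kernel NBD layer: bdev_svc is started with the bdev config, each bdev is attached to a /dev/nbdX node with the nbd_start_disk RPC, checked with a single 4096-byte O_DIRECT dd, and detached with nbd_stop_disk. A condensed sketch of that round trip for one device, assuming the nbd module is loaded (the script checks /sys/module/nbd) and using the dedicated RPC socket from the trace; the output path here is a placeholder:

# Attach the bdev to a kernel NBD device over the nbd-specific RPC socket.
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0

# Read one block through the NBD device with O_DIRECT, as the harness does
# for every device, to confirm the mapping works.
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct

# Detach the device again.
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0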
00:15:31.242 [2024-11-27 21:47:54.314523] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.504 [2024-11-27 21:47:54.463820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.504 [2024-11-27 21:47:54.493110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.076 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:32.076 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:32.076 21:47:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:32.076 21:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:32.076 21:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:32.077 21:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:32.077 21:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:32.077 21:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:32.077 21:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:32.077 21:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:32.077 21:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:32.077 21:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:32.077 21:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:32.077 21:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:32.077 21:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:32.337 21:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:32.338 21:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:32.338 21:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:32.338 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:32.338 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:32.338 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:32.338 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:32.338 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:32.338 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:32.338 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:32.338 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:32.338 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:32.338 
1+0 records in 00:15:32.338 1+0 records out 00:15:32.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00165664 s, 2.5 MB/s 00:15:32.338 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.338 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:32.338 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.338 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:32.338 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:32.338 21:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:32.338 21:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:32.338 21:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:32.599 21:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:32.599 21:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:32.599 21:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:32.599 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:32.599 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:32.599 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:32.599 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:32.599 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:32.599 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:32.599 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:32.599 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:32.599 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:32.600 1+0 records in 00:15:32.600 1+0 records out 00:15:32.600 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108025 s, 3.8 MB/s 00:15:32.600 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.600 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:32.600 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.600 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:32.600 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:32.600 21:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:32.600 21:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:32.600 21:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:32.861 21:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:32.861 21:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:32.861 21:47:55 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:32.861 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:32.861 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:32.861 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:32.861 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:32.861 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:32.861 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:32.861 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:32.861 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:32.861 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:32.861 1+0 records in 00:15:32.861 1+0 records out 00:15:32.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119138 s, 3.4 MB/s 00:15:32.861 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.861 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:32.861 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.861 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:32.861 21:47:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:32.861 21:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:32.861 21:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:32.861 21:47:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:33.123 21:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:33.123 21:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:33.123 21:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:33.123 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:33.123 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:33.123 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:33.123 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:33.123 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:33.123 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:33.123 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:33.123 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:33.123 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.123 1+0 records in 00:15:33.123 1+0 records out 00:15:33.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104758 s, 3.9 MB/s 00:15:33.123 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.123 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:33.123 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.123 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:33.123 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:33.123 21:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:33.123 21:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:33.123 21:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:33.385 21:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:33.385 21:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:33.385 21:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:33.385 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:33.385 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:33.385 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:33.385 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:33.385 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:33.385 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:33.385 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:33.385 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:33.385 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.385 1+0 records in 00:15:33.385 1+0 records out 00:15:33.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00138357 s, 3.0 MB/s 00:15:33.385 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.385 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:33.385 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.385 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:33.385 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:33.385 21:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:33.385 21:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:33.385 21:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:33.647 21:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:33.647 21:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:33.647 21:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:33.647 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:33.647 21:47:56 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:33.647 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:33.647 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:33.647 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:33.647 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:33.647 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:33.647 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:33.647 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.647 1+0 records in 00:15:33.647 1+0 records out 00:15:33.647 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00153241 s, 2.7 MB/s 00:15:33.647 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.647 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:33.647 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.647 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:33.647 21:47:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:33.647 21:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:33.647 21:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:33.647 21:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:33.910 21:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:33.910 { 00:15:33.910 "nbd_device": "/dev/nbd0", 00:15:33.910 "bdev_name": "nvme0n1" 00:15:33.910 }, 00:15:33.910 { 00:15:33.910 "nbd_device": "/dev/nbd1", 00:15:33.910 "bdev_name": "nvme0n2" 00:15:33.910 }, 00:15:33.910 { 00:15:33.910 "nbd_device": "/dev/nbd2", 00:15:33.910 "bdev_name": "nvme0n3" 00:15:33.910 }, 00:15:33.910 { 00:15:33.910 "nbd_device": "/dev/nbd3", 00:15:33.910 "bdev_name": "nvme1n1" 00:15:33.910 }, 00:15:33.910 { 00:15:33.910 "nbd_device": "/dev/nbd4", 00:15:33.910 "bdev_name": "nvme2n1" 00:15:33.910 }, 00:15:33.910 { 00:15:33.910 "nbd_device": "/dev/nbd5", 00:15:33.910 "bdev_name": "nvme3n1" 00:15:33.910 } 00:15:33.910 ]' 00:15:33.910 21:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:33.910 21:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:33.910 { 00:15:33.910 "nbd_device": "/dev/nbd0", 00:15:33.910 "bdev_name": "nvme0n1" 00:15:33.910 }, 00:15:33.910 { 00:15:33.910 "nbd_device": "/dev/nbd1", 00:15:33.910 "bdev_name": "nvme0n2" 00:15:33.910 }, 00:15:33.910 { 00:15:33.910 "nbd_device": "/dev/nbd2", 00:15:33.910 "bdev_name": "nvme0n3" 00:15:33.910 }, 00:15:33.910 { 00:15:33.910 "nbd_device": "/dev/nbd3", 00:15:33.910 "bdev_name": "nvme1n1" 00:15:33.910 }, 00:15:33.910 { 00:15:33.910 "nbd_device": "/dev/nbd4", 00:15:33.910 "bdev_name": "nvme2n1" 00:15:33.910 }, 00:15:33.910 { 00:15:33.910 "nbd_device": "/dev/nbd5", 00:15:33.910 "bdev_name": "nvme3n1" 00:15:33.910 } 00:15:33.910 ]' 00:15:33.910 21:47:56 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:33.910 21:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:33.910 21:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:33.910 21:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:33.910 21:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:33.910 21:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:33.910 21:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:33.910 21:47:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:34.171 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:34.171 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:34.171 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:34.171 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:34.171 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:34.171 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:34.171 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:34.171 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:34.171 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.171 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:34.433 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:34.433 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:34.433 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:34.433 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:34.433 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:34.433 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:34.433 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:34.433 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:34.433 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.433 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:34.695 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:34.695 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:34.695 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:34.695 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:34.695 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:34.695 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:34.695 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:34.695 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:34.695 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.695 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:34.957 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:34.957 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:34.957 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:34.957 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:34.957 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:34.957 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:34.957 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:34.957 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:34.957 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.957 21:47:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:35.217 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:35.217 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:35.217 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:35.217 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.217 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.217 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:35.217 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:35.217 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.217 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.218 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:35.218 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:35.218 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:35.218 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:35.218 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.218 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.218 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:35.218 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:35.218 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.218 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:35.218 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:35.218 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:35.478 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:35.478 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:35.478 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:35.478 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:35.478 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:35.478 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:35.478 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:35.478 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:35.478 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:35.478 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:35.478 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:35.478 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:35.478 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:35.478 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:35.478 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:35.478 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:35.478 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:35.478 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:35.478 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:35.478 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:35.478 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:35.478 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:35.478 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:35.478 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:35.478 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:35.478 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:35.478 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:35.478 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:35.741 /dev/nbd0 00:15:35.741 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:35.741 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:35.741 21:47:58 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:35.741 21:47:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:35.741 21:47:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:35.741 21:47:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:35.741 21:47:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:35.741 21:47:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:35.741 21:47:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:35.741 21:47:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:35.741 21:47:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:35.741 1+0 records in 00:15:35.741 1+0 records out 00:15:35.741 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00133085 s, 3.1 MB/s 00:15:35.741 21:47:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:35.741 21:47:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:35.741 21:47:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:35.741 21:47:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:35.741 21:47:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:35.741 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:35.741 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:35.741 21:47:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:15:36.003 /dev/nbd1 00:15:36.003 21:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:36.003 21:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:36.003 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:36.003 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:36.003 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:36.003 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:36.003 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:36.003 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:36.003 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:36.003 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:36.003 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:36.003 1+0 records in 00:15:36.003 1+0 records out 00:15:36.003 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00121874 s, 3.4 MB/s 00:15:36.003 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.003 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:36.003 21:47:59 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.003 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:36.003 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:36.003 21:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:36.003 21:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:36.003 21:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:15:36.264 /dev/nbd10 00:15:36.264 21:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:36.264 21:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:36.264 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:36.264 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:36.264 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:36.264 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:36.264 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:36.264 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:36.264 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:36.264 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:36.264 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:36.264 1+0 records in 00:15:36.264 1+0 records out 00:15:36.264 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000895702 s, 4.6 MB/s 00:15:36.264 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.264 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:36.264 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.264 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:36.264 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:36.264 21:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:36.264 21:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:36.264 21:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:15:36.526 /dev/nbd11 00:15:36.526 21:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:36.526 21:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:36.526 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:36.526 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:36.526 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:36.526 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:36.526 21:47:59 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:36.526 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:36.526 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:36.526 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:36.526 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:36.526 1+0 records in 00:15:36.526 1+0 records out 00:15:36.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000981767 s, 4.2 MB/s 00:15:36.526 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.526 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:36.526 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.526 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:36.526 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:36.526 21:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:36.526 21:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:36.526 21:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:36.788 /dev/nbd12 00:15:36.788 21:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:36.788 21:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:36.788 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:36.788 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:36.788 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:36.788 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:36.788 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:15:36.788 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:36.788 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:36.788 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:36.788 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:36.788 1+0 records in 00:15:36.788 1+0 records out 00:15:36.788 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00138198 s, 3.0 MB/s 00:15:36.788 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.788 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:36.788 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.788 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:36.788 21:47:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:36.788 21:47:59 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:36.788 21:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:36.788 21:47:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:37.050 /dev/nbd13 00:15:37.050 21:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:37.050 21:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:37.050 21:48:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:37.050 21:48:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:37.050 21:48:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:37.050 21:48:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:37.050 21:48:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:37.050 21:48:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:37.050 21:48:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:37.050 21:48:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:37.050 21:48:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.050 1+0 records in 00:15:37.050 1+0 records out 00:15:37.050 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100283 s, 4.1 MB/s 00:15:37.050 21:48:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.050 21:48:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:37.050 21:48:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.050 21:48:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:37.050 21:48:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:37.050 21:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:37.050 21:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:37.050 21:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:37.050 21:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:37.050 21:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:37.313 21:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:37.313 { 00:15:37.313 "nbd_device": "/dev/nbd0", 00:15:37.313 "bdev_name": "nvme0n1" 00:15:37.313 }, 00:15:37.313 { 00:15:37.313 "nbd_device": "/dev/nbd1", 00:15:37.313 "bdev_name": "nvme0n2" 00:15:37.313 }, 00:15:37.313 { 00:15:37.313 "nbd_device": "/dev/nbd10", 00:15:37.313 "bdev_name": "nvme0n3" 00:15:37.313 }, 00:15:37.313 { 00:15:37.313 "nbd_device": "/dev/nbd11", 00:15:37.313 "bdev_name": "nvme1n1" 00:15:37.313 }, 00:15:37.313 { 00:15:37.313 "nbd_device": "/dev/nbd12", 00:15:37.313 "bdev_name": "nvme2n1" 00:15:37.313 }, 00:15:37.313 { 00:15:37.313 "nbd_device": "/dev/nbd13", 00:15:37.313 "bdev_name": "nvme3n1" 00:15:37.313 } 00:15:37.313 ]' 00:15:37.313 21:48:00 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:37.313 21:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:37.313 { 00:15:37.313 "nbd_device": "/dev/nbd0", 00:15:37.313 "bdev_name": "nvme0n1" 00:15:37.313 }, 00:15:37.313 { 00:15:37.313 "nbd_device": "/dev/nbd1", 00:15:37.313 "bdev_name": "nvme0n2" 00:15:37.313 }, 00:15:37.313 { 00:15:37.313 "nbd_device": "/dev/nbd10", 00:15:37.313 "bdev_name": "nvme0n3" 00:15:37.313 }, 00:15:37.313 { 00:15:37.314 "nbd_device": "/dev/nbd11", 00:15:37.314 "bdev_name": "nvme1n1" 00:15:37.314 }, 00:15:37.314 { 00:15:37.314 "nbd_device": "/dev/nbd12", 00:15:37.314 "bdev_name": "nvme2n1" 00:15:37.314 }, 00:15:37.314 { 00:15:37.314 "nbd_device": "/dev/nbd13", 00:15:37.314 "bdev_name": "nvme3n1" 00:15:37.314 } 00:15:37.314 ]' 00:15:37.314 21:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:37.314 /dev/nbd1 00:15:37.314 /dev/nbd10 00:15:37.314 /dev/nbd11 00:15:37.314 /dev/nbd12 00:15:37.314 /dev/nbd13' 00:15:37.314 21:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:37.314 /dev/nbd1 00:15:37.314 /dev/nbd10 00:15:37.314 /dev/nbd11 00:15:37.314 /dev/nbd12 00:15:37.314 /dev/nbd13' 00:15:37.314 21:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:37.314 21:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:37.314 21:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:37.314 21:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:37.314 21:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:37.314 21:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:37.314 21:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:37.314 21:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:37.314 21:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:37.314 21:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:37.314 21:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:37.314 21:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:37.314 256+0 records in 00:15:37.314 256+0 records out 00:15:37.314 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00635443 s, 165 MB/s 00:15:37.314 21:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:37.314 21:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:37.577 256+0 records in 00:15:37.577 256+0 records out 00:15:37.577 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.236249 s, 4.4 MB/s 00:15:37.577 21:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:37.577 21:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:37.839 256+0 records in 00:15:37.839 256+0 records out 00:15:37.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.244485 s, 
4.3 MB/s 00:15:37.839 21:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:37.839 21:48:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:38.100 256+0 records in 00:15:38.100 256+0 records out 00:15:38.101 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.202762 s, 5.2 MB/s 00:15:38.101 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:38.101 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:38.362 256+0 records in 00:15:38.362 256+0 records out 00:15:38.362 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.242247 s, 4.3 MB/s 00:15:38.362 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:38.362 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:38.623 256+0 records in 00:15:38.623 256+0 records out 00:15:38.623 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.274505 s, 3.8 MB/s 00:15:38.623 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:38.623 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:38.884 256+0 records in 00:15:38.884 256+0 records out 00:15:38.884 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.197198 s, 5.3 MB/s 00:15:38.884 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:38.884 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:38.884 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:38.884 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:38.884 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:38.884 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:38.884 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:38.884 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:38.884 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:38.884 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:38.884 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:38.884 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:38.884 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:38.884 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:38.884 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:38.884 
21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:38.884 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:38.884 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:38.884 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:38.884 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:38.884 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:38.884 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:38.885 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:38.885 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:38.885 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:38.885 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.885 21:48:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:39.146 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:39.146 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:39.146 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:39.146 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.146 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.146 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:39.146 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:39.146 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.146 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.146 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:39.429 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:39.429 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:39.429 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:39.429 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.429 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.429 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:39.429 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:39.429 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.429 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.429 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:15:39.691 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:39.691 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:39.691 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:39.691 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.691 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.691 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:39.691 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:39.691 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.691 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.691 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:39.691 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:39.691 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:39.691 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:39.691 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.691 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.691 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:39.691 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:39.691 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.691 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.691 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:39.950 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:39.950 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:39.950 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:39.950 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.950 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.951 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:39.951 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:39.951 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.951 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.951 21:48:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:40.209 21:48:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:40.209 21:48:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:40.209 21:48:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:40.209 21:48:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.209 21:48:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.209 
21:48:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:40.209 21:48:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:40.209 21:48:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.209 21:48:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:40.209 21:48:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:40.209 21:48:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:40.467 21:48:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:40.467 21:48:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:40.467 21:48:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:40.467 21:48:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:40.467 21:48:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:40.467 21:48:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:40.467 21:48:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:40.467 21:48:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:40.467 21:48:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:40.467 21:48:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:40.467 21:48:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:40.467 21:48:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:40.467 21:48:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:40.467 21:48:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:40.467 21:48:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:40.467 21:48:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:40.726 malloc_lvol_verify 00:15:40.726 21:48:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:40.726 cdfc0cb1-5530-4e9c-9cf8-f234e82b4a85 00:15:40.984 21:48:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:40.984 5ec036dd-9384-4809-a41b-4ee93161e449 00:15:40.984 21:48:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:41.242 /dev/nbd0 00:15:41.242 21:48:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:41.242 21:48:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:41.242 21:48:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:41.242 21:48:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:41.242 21:48:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:41.242 mke2fs 1.47.0 (5-Feb-2023) 00:15:41.242 Discarding device blocks: 0/4096 
done 00:15:41.242 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:41.242 00:15:41.242 Allocating group tables: 0/1 done 00:15:41.242 Writing inode tables: 0/1 done 00:15:41.242 Creating journal (1024 blocks): done 00:15:41.242 Writing superblocks and filesystem accounting information: 0/1 done 00:15:41.242 00:15:41.242 21:48:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:41.242 21:48:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:41.242 21:48:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:41.242 21:48:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:41.242 21:48:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:41.242 21:48:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:41.242 21:48:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:41.502 21:48:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:41.502 21:48:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:41.502 21:48:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:41.502 21:48:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:41.502 21:48:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:41.502 21:48:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:41.502 21:48:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:41.502 21:48:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:41.502 21:48:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 83365 00:15:41.502 21:48:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 83365 ']' 00:15:41.502 21:48:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 83365 00:15:41.502 21:48:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:41.502 21:48:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:41.502 21:48:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83365 00:15:41.502 killing process with pid 83365 00:15:41.502 21:48:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:41.502 21:48:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:41.502 21:48:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83365' 00:15:41.502 21:48:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 83365 00:15:41.502 21:48:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 83365 00:15:41.764 21:48:04 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:41.764 00:15:41.764 real 0m10.399s 00:15:41.764 user 0m14.182s 00:15:41.764 sys 0m3.822s 00:15:41.764 21:48:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.764 21:48:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:41.764 ************************************ 00:15:41.764 END TEST bdev_nbd 00:15:41.764 ************************************ 
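The nbd start/stop sequences above all follow the same polling pattern: the RPC server is asked to attach or detach an NBD device, the test then polls /proc/partitions up to 20 times until the kernel's partition table reflects the change, and on attach it reads one 4 KiB block back through the device to confirm it actually serves I/O. A condensed bash sketch of that pattern follows; the retry bound, the grep of /proc/partitions, and the dd/stat read-back are taken from the trace, while the sleep between attempts is an assumption added for illustration.

    waitfornbd_sketch() {
        # Poll until the nbd device shows up in the kernel partition table.
        local nbd_name=$1 tmp=nbdtest.bin size i
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off; not visible in this excerpt
        done
        # Read a single 4 KiB block back to prove the device serves I/O,
        # mirroring the dd/stat pair recorded in the trace above.
        dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" -ne 0 ]
    }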
00:15:41.764 21:48:04 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:15:41.764 21:48:04 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:15:41.764 21:48:04 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:15:41.764 21:48:04 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:15:41.764 21:48:04 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:41.764 21:48:04 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.764 21:48:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:41.764 ************************************ 00:15:41.764 START TEST bdev_fio 00:15:41.764 ************************************ 00:15:41.764 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:41.764 21:48:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:15:41.764 21:48:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:41.764 21:48:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:41.764 21:48:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:41.764 21:48:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:41.765 ************************************ 00:15:41.765 START TEST bdev_fio_rw_verify 00:15:41.765 ************************************ 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:41.765 21:48:04 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:42.028 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:42.028 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:42.028 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:42.028 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:42.028 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:42.028 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:42.028 fio-3.35 00:15:42.028 Starting 6 threads 00:15:54.324 00:15:54.324 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=83759: Wed Nov 27 21:48:15 2024 00:15:54.324 read: IOPS=16.5k, BW=64.6MiB/s (67.7MB/s)(646MiB/10002msec) 00:15:54.324 slat (usec): min=2, max=2574, avg= 6.77, stdev=18.24 00:15:54.324 clat (usec): min=73, max=446888, avg=1149.79, stdev=3185.68 00:15:54.324 lat (usec): min=76, max=446893, avg=1156.56, stdev=3185.88 
00:15:54.324 clat percentiles (usec): 00:15:54.324 | 50.000th=[ 1004], 99.000th=[ 3458], 99.900th=[ 4883], 00:15:54.324 | 99.990th=[ 6194], 99.999th=[446694] 00:15:54.324 write: IOPS=17.0k, BW=66.2MiB/s (69.4MB/s)(662MiB/10002msec); 0 zone resets 00:15:54.324 slat (usec): min=10, max=3714, avg=40.99, stdev=134.25 00:15:54.324 clat (usec): min=79, max=7762, avg=1386.78, stdev=820.69 00:15:54.324 lat (usec): min=101, max=7863, avg=1427.77, stdev=833.84 00:15:54.324 clat percentiles (usec): 00:15:54.324 | 50.000th=[ 1254], 99.000th=[ 3916], 99.900th=[ 5473], 99.990th=[ 6980], 00:15:54.324 | 99.999th=[ 7767] 00:15:54.324 bw ( KiB/s): min=46947, max=103071, per=100.00%, avg=68443.42, stdev=2805.96, samples=114 00:15:54.324 iops : min=11733, max=25767, avg=17109.53, stdev=701.59, samples=114 00:15:54.324 lat (usec) : 100=0.01%, 250=4.90%, 500=11.61%, 750=13.50%, 1000=13.12% 00:15:54.324 lat (msec) : 2=41.31%, 4=14.94%, 10=0.60%, 500=0.01% 00:15:54.324 cpu : usr=41.15%, sys=33.78%, ctx=5927, majf=0, minf=18001 00:15:54.324 IO depths : 1=11.3%, 2=23.8%, 4=51.2%, 8=13.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:54.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.324 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.324 issued rwts: total=165408,169544,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:54.324 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:54.324 00:15:54.324 Run status group 0 (all jobs): 00:15:54.324 READ: bw=64.6MiB/s (67.7MB/s), 64.6MiB/s-64.6MiB/s (67.7MB/s-67.7MB/s), io=646MiB (678MB), run=10002-10002msec 00:15:54.324 WRITE: bw=66.2MiB/s (69.4MB/s), 66.2MiB/s-66.2MiB/s (69.4MB/s-69.4MB/s), io=662MiB (694MB), run=10002-10002msec 00:15:54.324 ----------------------------------------------------- 00:15:54.324 Suppressions used: 00:15:54.324 count bytes template 00:15:54.324 6 48 /usr/src/fio/parse.c 00:15:54.324 4030 386880 /usr/src/fio/iolog.c 00:15:54.324 1 8 libtcmalloc_minimal.so 00:15:54.324 1 904 libcrypto.so 00:15:54.324 ----------------------------------------------------- 00:15:54.324 00:15:54.324 00:15:54.324 real 0m11.168s 00:15:54.324 user 0m25.411s 00:15:54.324 sys 0m20.606s 00:15:54.324 21:48:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.324 21:48:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:54.324 ************************************ 00:15:54.324 END TEST bdev_fio_rw_verify 00:15:54.324 ************************************ 00:15:54.324 21:48:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:15:54.324 21:48:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:54.324 21:48:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:54.324 21:48:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:54.324 21:48:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:15:54.324 21:48:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:15:54.324 21:48:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:54.324 21:48:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:54.324 21:48:15 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:54.324 21:48:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:15:54.324 21:48:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:54.324 21:48:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:54.324 21:48:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:54.325 21:48:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:15:54.325 21:48:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:15:54.325 21:48:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:15:54.325 21:48:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:54.325 21:48:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "fe9c4c04-b18a-458b-85a3-493439df11c8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fe9c4c04-b18a-458b-85a3-493439df11c8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "2c02be42-4827-4612-9540-4ac5c340fe04"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2c02be42-4827-4612-9540-4ac5c340fe04",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "b403313c-b6ea-426d-8ae1-d48f3d17acdb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b403313c-b6ea-426d-8ae1-d48f3d17acdb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "1d6a35ff-650b-44c5-a9ae-a101bd9c0fa2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1d6a35ff-650b-44c5-a9ae-a101bd9c0fa2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "5a7156de-7996-4d7f-a854-c4d66b075a46"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "5a7156de-7996-4d7f-a854-c4d66b075a46",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "20c2ba73-cc6f-4bcb-94c5-a48002f90350"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "20c2ba73-cc6f-4bcb-94c5-a48002f90350",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:54.325 21:48:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:15:54.325 21:48:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:54.325 /home/vagrant/spdk_repo/spdk 00:15:54.325 21:48:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:15:54.325 21:48:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:15:54.325 21:48:16 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:15:54.325 00:15:54.325 real 0m11.335s 00:15:54.325 user 
0m25.492s 00:15:54.325 sys 0m20.674s 00:15:54.325 ************************************ 00:15:54.325 END TEST bdev_fio 00:15:54.325 ************************************ 00:15:54.325 21:48:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.325 21:48:16 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:54.325 21:48:16 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:54.325 21:48:16 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:54.325 21:48:16 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:54.325 21:48:16 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.325 21:48:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:54.325 ************************************ 00:15:54.325 START TEST bdev_verify 00:15:54.325 ************************************ 00:15:54.325 21:48:16 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:54.325 [2024-11-27 21:48:16.156243] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:15:54.325 [2024-11-27 21:48:16.156399] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83930 ] 00:15:54.325 [2024-11-27 21:48:16.301664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:54.325 [2024-11-27 21:48:16.331650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.325 [2024-11-27 21:48:16.331707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.325 Running I/O for 5 seconds... 
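The verify pass that has just started is driven by SPDK's bdevperf example application against the xnvme bdevs described in bdev.json, with the parameters recorded in the run_test line above. A minimal standalone re-run with the same parameters, assuming the repository layout used in this workspace, would be:

    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    # -q queue depth, -o I/O size in bytes, -w workload, -t run time in
    # seconds, -m core mask; -C is kept exactly as recorded in the trace.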
00:15:55.842 24736.00 IOPS, 96.62 MiB/s [2024-11-27T21:48:19.906Z] 25072.00 IOPS, 97.94 MiB/s [2024-11-27T21:48:20.851Z] 24629.33 IOPS, 96.21 MiB/s [2024-11-27T21:48:21.795Z] 23992.00 IOPS, 93.72 MiB/s [2024-11-27T21:48:21.795Z] 23936.00 IOPS, 93.50 MiB/s 00:15:58.674 Latency(us) 00:15:58.674 [2024-11-27T21:48:21.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:58.674 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:58.674 Verification LBA range: start 0x0 length 0x80000 00:15:58.674 nvme0n1 : 5.03 1908.76 7.46 0.00 0.00 66937.42 6704.84 66544.25 00:15:58.674 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:58.674 Verification LBA range: start 0x80000 length 0x80000 00:15:58.674 nvme0n1 : 5.06 1923.81 7.51 0.00 0.00 66422.82 4234.63 69367.34 00:15:58.674 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:58.674 Verification LBA range: start 0x0 length 0x80000 00:15:58.674 nvme0n2 : 5.03 1907.95 7.45 0.00 0.00 66852.22 14518.74 59284.87 00:15:58.674 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:58.674 Verification LBA range: start 0x80000 length 0x80000 00:15:58.674 nvme0n2 : 5.07 1919.49 7.50 0.00 0.00 66467.79 10838.65 60494.77 00:15:58.674 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:58.674 Verification LBA range: start 0x0 length 0x80000 00:15:58.674 nvme0n3 : 5.07 1917.84 7.49 0.00 0.00 66386.94 11947.72 70577.23 00:15:58.674 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:58.674 Verification LBA range: start 0x80000 length 0x80000 00:15:58.674 nvme0n3 : 5.06 1922.84 7.51 0.00 0.00 66227.63 6452.78 64527.75 00:15:58.674 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:58.674 Verification LBA range: start 0x0 length 0x20000 00:15:58.674 nvme1n1 : 5.05 1901.07 7.43 0.00 0.00 66850.17 11342.77 66544.25 00:15:58.674 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:58.674 Verification LBA range: start 0x20000 length 0x20000 00:15:58.674 nvme1n1 : 5.03 1932.74 7.55 0.00 0.00 65770.06 6956.90 74206.92 00:15:58.674 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:58.674 Verification LBA range: start 0x0 length 0xbd0bd 00:15:58.674 nvme2n1 : 5.08 2485.83 9.71 0.00 0.00 51018.29 6553.60 52832.10 00:15:58.674 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:58.674 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:15:58.674 nvme2n1 : 5.09 2522.74 9.85 0.00 0.00 50289.16 6301.54 58478.28 00:15:58.674 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:58.674 Verification LBA range: start 0x0 length 0xa0000 00:15:58.674 nvme3n1 : 5.12 1825.79 7.13 0.00 0.00 69233.57 2054.30 79449.80 00:15:58.674 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:58.674 Verification LBA range: start 0xa0000 length 0xa0000 00:15:58.674 nvme3n1 : 5.12 1575.66 6.15 0.00 0.00 80441.68 2545.82 106470.79 00:15:58.674 [2024-11-27T21:48:21.795Z] =================================================================================================================== 00:15:58.674 [2024-11-27T21:48:21.795Z] Total : 23744.52 92.75 0.00 0.00 64283.32 2054.30 106470.79 00:15:58.936 00:15:58.936 real 0m5.939s 00:15:58.936 user 0m9.533s 00:15:58.936 sys 0m1.505s 00:15:58.936 21:48:22 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:15:58.936 ************************************ 00:15:58.936 21:48:22 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:58.936 END TEST bdev_verify 00:15:58.936 ************************************ 00:15:59.196 21:48:22 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:59.197 21:48:22 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:59.197 21:48:22 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:59.197 21:48:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:59.197 ************************************ 00:15:59.197 START TEST bdev_verify_big_io 00:15:59.197 ************************************ 00:15:59.197 21:48:22 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:59.197 [2024-11-27 21:48:22.175170] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:15:59.197 [2024-11-27 21:48:22.175497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84022 ] 00:15:59.458 [2024-11-27 21:48:22.322266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:59.458 [2024-11-27 21:48:22.362202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:59.458 [2024-11-27 21:48:22.362272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.720 Running I/O for 5 seconds... 
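The big-I/O variant that follows uses the same invocation with 64 KiB I/Os instead of 4 KiB (again copied from the xtrace line above):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''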
00:16:05.845 2320.00 IOPS, 145.00 MiB/s [2024-11-27T21:48:29.227Z] 3086.00 IOPS, 192.88 MiB/s 00:16:06.106 Latency(us) 00:16:06.106 [2024-11-27T21:48:29.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.106 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:06.106 Verification LBA range: start 0x0 length 0x8000 00:16:06.106 nvme0n1 : 5.82 142.94 8.93 0.00 0.00 865786.64 6049.48 1064707.94 00:16:06.106 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:06.106 Verification LBA range: start 0x8000 length 0x8000 00:16:06.106 nvme0n1 : 5.87 76.33 4.77 0.00 0.00 1577572.15 335544.32 1664816.05 00:16:06.106 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:06.106 Verification LBA range: start 0x0 length 0x8000 00:16:06.106 nvme0n2 : 5.82 126.40 7.90 0.00 0.00 948681.55 93968.54 1400252.26 00:16:06.106 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:06.106 Verification LBA range: start 0x8000 length 0x8000 00:16:06.106 nvme0n2 : 6.20 80.06 5.00 0.00 0.00 1425533.34 5772.21 1400252.26 00:16:06.106 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:06.106 Verification LBA range: start 0x0 length 0x8000 00:16:06.106 nvme0n3 : 5.89 147.97 9.25 0.00 0.00 795460.43 144380.85 719484.46 00:16:06.106 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:06.106 Verification LBA range: start 0x8000 length 0x8000 00:16:06.106 nvme0n3 : 6.04 84.71 5.29 0.00 0.00 1313902.28 85902.57 1109877.37 00:16:06.106 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:06.106 Verification LBA range: start 0x0 length 0x2000 00:16:06.106 nvme1n1 : 5.91 165.22 10.33 0.00 0.00 700100.92 79046.50 745295.56 00:16:06.106 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:06.106 Verification LBA range: start 0x2000 length 0x2000 00:16:06.106 nvme1n1 : 6.14 69.03 4.31 0.00 0.00 1519890.18 59688.17 1464780.01 00:16:06.106 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:06.106 Verification LBA range: start 0x0 length 0xbd0b 00:16:06.106 nvme2n1 : 5.90 162.81 10.18 0.00 0.00 684314.31 117763.15 1535760.54 00:16:06.106 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:06.106 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:06.106 nvme2n1 : 6.25 179.24 11.20 0.00 0.00 567075.40 4234.63 1142141.24 00:16:06.106 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:06.106 Verification LBA range: start 0x0 length 0xa000 00:16:06.106 nvme3n1 : 5.92 148.72 9.29 0.00 0.00 737635.23 3906.95 1464780.01 00:16:06.106 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:06.106 Verification LBA range: start 0xa000 length 0xa000 00:16:06.106 nvme3n1 : 6.41 214.74 13.42 0.00 0.00 451722.20 696.32 2387526.89 00:16:06.106 [2024-11-27T21:48:29.227Z] =================================================================================================================== 00:16:06.106 [2024-11-27T21:48:29.227Z] Total : 1598.15 99.88 0.00 0.00 839828.42 696.32 2387526.89 00:16:06.367 00:16:06.367 real 0m7.335s 00:16:06.367 user 0m13.445s 00:16:06.367 sys 0m0.494s 00:16:06.367 ************************************ 00:16:06.367 END TEST bdev_verify_big_io 00:16:06.367 ************************************ 00:16:06.367 21:48:29 blockdev_xnvme.bdev_verify_big_io -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:06.367 21:48:29 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:06.628 21:48:29 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:06.628 21:48:29 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:06.628 21:48:29 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:06.628 21:48:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:06.628 ************************************ 00:16:06.628 START TEST bdev_write_zeroes 00:16:06.628 ************************************ 00:16:06.628 21:48:29 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:06.628 [2024-11-27 21:48:29.577847] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:16:06.628 [2024-11-27 21:48:29.577971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84129 ] 00:16:06.628 [2024-11-27 21:48:29.725043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.889 [2024-11-27 21:48:29.763084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.151 Running I/O for 1 seconds... 00:16:08.096 68765.00 IOPS, 268.61 MiB/s 00:16:08.096 Latency(us) 00:16:08.096 [2024-11-27T21:48:31.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:08.096 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:08.096 nvme0n1 : 1.02 11261.68 43.99 0.00 0.00 11354.03 6856.07 18753.38 00:16:08.096 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:08.096 nvme0n2 : 1.01 11301.93 44.15 0.00 0.00 11301.86 7057.72 20064.10 00:16:08.096 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:08.096 nvme0n3 : 1.02 11241.37 43.91 0.00 0.00 11352.33 7410.61 20669.05 00:16:08.096 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:08.096 nvme1n1 : 1.03 11228.68 43.86 0.00 0.00 11354.64 6125.10 21677.29 00:16:08.096 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:08.096 nvme2n1 : 1.02 12307.47 48.08 0.00 0.00 10349.33 4335.46 19559.98 00:16:08.097 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:08.097 nvme3n1 : 1.03 11215.84 43.81 0.00 0.00 11266.07 3831.34 20769.87 00:16:08.097 [2024-11-27T21:48:31.218Z] =================================================================================================================== 00:16:08.097 [2024-11-27T21:48:31.218Z] Total : 68556.97 267.80 0.00 0.00 11150.56 3831.34 21677.29 00:16:08.357 00:16:08.357 real 0m1.857s 00:16:08.357 user 0m1.139s 00:16:08.357 sys 0m0.528s 00:16:08.357 21:48:31 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:08.357 21:48:31 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:08.357 ************************************ 00:16:08.357 
END TEST bdev_write_zeroes 00:16:08.357 ************************************ 00:16:08.357 21:48:31 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:08.357 21:48:31 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:08.357 21:48:31 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:08.357 21:48:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:08.357 ************************************ 00:16:08.357 START TEST bdev_json_nonenclosed 00:16:08.357 ************************************ 00:16:08.357 21:48:31 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:08.619 [2024-11-27 21:48:31.521075] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:16:08.619 [2024-11-27 21:48:31.521577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84166 ] 00:16:08.619 [2024-11-27 21:48:31.667523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.619 [2024-11-27 21:48:31.709121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.619 [2024-11-27 21:48:31.709254] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:08.619 [2024-11-27 21:48:31.709279] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:08.619 [2024-11-27 21:48:31.709295] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:08.880 ************************************ 00:16:08.880 END TEST bdev_json_nonenclosed 00:16:08.880 00:16:08.880 real 0m0.356s 00:16:08.880 user 0m0.137s 00:16:08.880 sys 0m0.113s 00:16:08.880 21:48:31 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:08.880 21:48:31 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:08.880 ************************************ 00:16:08.880 21:48:31 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:08.880 21:48:31 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:08.880 21:48:31 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:08.880 21:48:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:08.880 ************************************ 00:16:08.880 START TEST bdev_json_nonarray 00:16:08.880 ************************************ 00:16:08.880 21:48:31 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:08.880 [2024-11-27 21:48:31.935884] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:16:08.880 [2024-11-27 21:48:31.936027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84191 ] 00:16:09.141 [2024-11-27 21:48:32.084375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.141 [2024-11-27 21:48:32.122610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.141 [2024-11-27 21:48:32.123069] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:16:09.141 [2024-11-27 21:48:32.123099] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:09.141 [2024-11-27 21:48:32.123124] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:09.141 ************************************ 00:16:09.141 END TEST bdev_json_nonarray 00:16:09.142 ************************************ 00:16:09.142 00:16:09.142 real 0m0.347s 00:16:09.142 user 0m0.133s 00:16:09.142 sys 0m0.109s 00:16:09.142 21:48:32 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:09.142 21:48:32 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:09.142 21:48:32 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:16:09.142 21:48:32 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:16:09.142 21:48:32 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:16:09.142 21:48:32 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:16:09.142 21:48:32 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:16:09.142 21:48:32 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:09.403 21:48:32 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:09.403 21:48:32 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:09.403 21:48:32 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:09.403 21:48:32 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:09.403 21:48:32 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:09.403 21:48:32 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:09.664 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:17.815 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:17.815 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:17.815 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:20.362 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:20.362 00:16:20.362 real 0m53.866s 00:16:20.362 user 1m11.992s 00:16:20.362 sys 0m51.821s 00:16:20.362 21:48:42 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.362 21:48:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:20.362 ************************************ 00:16:20.362 END TEST blockdev_xnvme 00:16:20.362 ************************************ 00:16:20.362 21:48:42 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:20.362 21:48:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:20.362 21:48:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.362 21:48:42 -- 
common/autotest_common.sh@10 -- # set +x 00:16:20.362 ************************************ 00:16:20.362 START TEST ublk 00:16:20.362 ************************************ 00:16:20.362 21:48:43 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:20.362 * Looking for test storage... 00:16:20.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:20.362 21:48:43 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:20.362 21:48:43 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:20.362 21:48:43 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:16:20.362 21:48:43 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:20.362 21:48:43 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:20.362 21:48:43 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:20.362 21:48:43 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:20.362 21:48:43 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:20.362 21:48:43 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:20.362 21:48:43 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:20.362 21:48:43 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:20.362 21:48:43 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:20.362 21:48:43 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:20.362 21:48:43 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:20.362 21:48:43 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:20.362 21:48:43 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:20.362 21:48:43 ublk -- scripts/common.sh@345 -- # : 1 00:16:20.362 21:48:43 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:20.362 21:48:43 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:20.362 21:48:43 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:20.362 21:48:43 ublk -- scripts/common.sh@353 -- # local d=1 00:16:20.362 21:48:43 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:20.362 21:48:43 ublk -- scripts/common.sh@355 -- # echo 1 00:16:20.362 21:48:43 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:20.362 21:48:43 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:20.362 21:48:43 ublk -- scripts/common.sh@353 -- # local d=2 00:16:20.362 21:48:43 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:20.362 21:48:43 ublk -- scripts/common.sh@355 -- # echo 2 00:16:20.362 21:48:43 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:20.362 21:48:43 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:20.362 21:48:43 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:20.362 21:48:43 ublk -- scripts/common.sh@368 -- # return 0 00:16:20.362 21:48:43 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:20.363 21:48:43 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:20.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.363 --rc genhtml_branch_coverage=1 00:16:20.363 --rc genhtml_function_coverage=1 00:16:20.363 --rc genhtml_legend=1 00:16:20.363 --rc geninfo_all_blocks=1 00:16:20.363 --rc geninfo_unexecuted_blocks=1 00:16:20.363 00:16:20.363 ' 00:16:20.363 21:48:43 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:20.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.363 --rc genhtml_branch_coverage=1 00:16:20.363 --rc genhtml_function_coverage=1 00:16:20.363 --rc genhtml_legend=1 00:16:20.363 --rc geninfo_all_blocks=1 00:16:20.363 --rc geninfo_unexecuted_blocks=1 00:16:20.363 00:16:20.363 ' 00:16:20.363 21:48:43 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:20.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.363 --rc genhtml_branch_coverage=1 00:16:20.363 --rc genhtml_function_coverage=1 00:16:20.363 --rc genhtml_legend=1 00:16:20.363 --rc geninfo_all_blocks=1 00:16:20.363 --rc geninfo_unexecuted_blocks=1 00:16:20.363 00:16:20.363 ' 00:16:20.363 21:48:43 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:20.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.363 --rc genhtml_branch_coverage=1 00:16:20.363 --rc genhtml_function_coverage=1 00:16:20.363 --rc genhtml_legend=1 00:16:20.363 --rc geninfo_all_blocks=1 00:16:20.363 --rc geninfo_unexecuted_blocks=1 00:16:20.363 00:16:20.363 ' 00:16:20.363 21:48:43 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:20.363 21:48:43 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:20.363 21:48:43 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:20.363 21:48:43 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:20.363 21:48:43 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:20.363 21:48:43 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:20.363 21:48:43 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:20.363 21:48:43 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:20.363 21:48:43 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:20.363 21:48:43 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:20.363 21:48:43 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:20.363 21:48:43 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:20.363 21:48:43 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:20.363 21:48:43 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:20.363 21:48:43 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:20.363 21:48:43 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:20.363 21:48:43 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:20.363 21:48:43 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:20.363 21:48:43 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:20.363 21:48:43 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:20.363 21:48:43 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:20.363 21:48:43 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.363 21:48:43 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:20.363 ************************************ 00:16:20.363 START TEST test_save_ublk_config 00:16:20.363 ************************************ 00:16:20.363 21:48:43 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:20.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.363 21:48:43 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:20.363 21:48:43 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=84486 00:16:20.363 21:48:43 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:20.363 21:48:43 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 84486 00:16:20.363 21:48:43 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 84486 ']' 00:16:20.363 21:48:43 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.363 21:48:43 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:20.363 21:48:43 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:20.363 21:48:43 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.363 21:48:43 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:20.363 21:48:43 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:20.363 [2024-11-27 21:48:43.244051] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:16:20.363 [2024-11-27 21:48:43.244323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84486 ] 00:16:20.363 [2024-11-27 21:48:43.390514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.363 [2024-11-27 21:48:43.430758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.306 21:48:44 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:21.306 21:48:44 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:21.306 21:48:44 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:21.306 21:48:44 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:21.306 21:48:44 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.306 21:48:44 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:21.306 [2024-11-27 21:48:44.102365] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:21.306 [2024-11-27 21:48:44.103589] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:21.306 malloc0 00:16:21.306 [2024-11-27 21:48:44.142505] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:21.306 [2024-11-27 21:48:44.142598] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:21.306 [2024-11-27 21:48:44.142610] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:21.306 [2024-11-27 21:48:44.142631] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:21.306 [2024-11-27 21:48:44.151488] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:21.306 [2024-11-27 21:48:44.151526] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:21.306 [2024-11-27 21:48:44.158373] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:21.306 [2024-11-27 21:48:44.158511] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:21.306 [2024-11-27 21:48:44.175372] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:21.306 0 00:16:21.306 21:48:44 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.306 21:48:44 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:21.306 21:48:44 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.306 21:48:44 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:21.567 21:48:44 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.567 21:48:44 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:21.567 "subsystems": [ 00:16:21.567 { 00:16:21.567 "subsystem": "fsdev", 00:16:21.567 "config": [ 00:16:21.567 { 00:16:21.567 "method": "fsdev_set_opts", 00:16:21.567 "params": { 00:16:21.567 "fsdev_io_pool_size": 65535, 00:16:21.567 "fsdev_io_cache_size": 256 00:16:21.567 } 00:16:21.567 } 00:16:21.567 ] 00:16:21.567 }, 00:16:21.567 { 00:16:21.567 "subsystem": "keyring", 00:16:21.567 "config": [] 00:16:21.567 }, 00:16:21.567 { 00:16:21.567 "subsystem": "iobuf", 00:16:21.567 "config": [ 00:16:21.567 { 
00:16:21.567 "method": "iobuf_set_options", 00:16:21.567 "params": { 00:16:21.567 "small_pool_count": 8192, 00:16:21.567 "large_pool_count": 1024, 00:16:21.567 "small_bufsize": 8192, 00:16:21.567 "large_bufsize": 135168, 00:16:21.567 "enable_numa": false 00:16:21.567 } 00:16:21.567 } 00:16:21.567 ] 00:16:21.567 }, 00:16:21.567 { 00:16:21.567 "subsystem": "sock", 00:16:21.567 "config": [ 00:16:21.567 { 00:16:21.567 "method": "sock_set_default_impl", 00:16:21.567 "params": { 00:16:21.567 "impl_name": "posix" 00:16:21.567 } 00:16:21.567 }, 00:16:21.567 { 00:16:21.567 "method": "sock_impl_set_options", 00:16:21.567 "params": { 00:16:21.567 "impl_name": "ssl", 00:16:21.567 "recv_buf_size": 4096, 00:16:21.567 "send_buf_size": 4096, 00:16:21.567 "enable_recv_pipe": true, 00:16:21.567 "enable_quickack": false, 00:16:21.567 "enable_placement_id": 0, 00:16:21.567 "enable_zerocopy_send_server": true, 00:16:21.567 "enable_zerocopy_send_client": false, 00:16:21.567 "zerocopy_threshold": 0, 00:16:21.567 "tls_version": 0, 00:16:21.567 "enable_ktls": false 00:16:21.567 } 00:16:21.567 }, 00:16:21.567 { 00:16:21.567 "method": "sock_impl_set_options", 00:16:21.567 "params": { 00:16:21.567 "impl_name": "posix", 00:16:21.567 "recv_buf_size": 2097152, 00:16:21.567 "send_buf_size": 2097152, 00:16:21.567 "enable_recv_pipe": true, 00:16:21.567 "enable_quickack": false, 00:16:21.567 "enable_placement_id": 0, 00:16:21.567 "enable_zerocopy_send_server": true, 00:16:21.567 "enable_zerocopy_send_client": false, 00:16:21.567 "zerocopy_threshold": 0, 00:16:21.567 "tls_version": 0, 00:16:21.567 "enable_ktls": false 00:16:21.567 } 00:16:21.567 } 00:16:21.567 ] 00:16:21.567 }, 00:16:21.567 { 00:16:21.567 "subsystem": "vmd", 00:16:21.567 "config": [] 00:16:21.567 }, 00:16:21.567 { 00:16:21.567 "subsystem": "accel", 00:16:21.567 "config": [ 00:16:21.567 { 00:16:21.567 "method": "accel_set_options", 00:16:21.567 "params": { 00:16:21.567 "small_cache_size": 128, 00:16:21.567 "large_cache_size": 16, 00:16:21.567 "task_count": 2048, 00:16:21.567 "sequence_count": 2048, 00:16:21.567 "buf_count": 2048 00:16:21.567 } 00:16:21.567 } 00:16:21.567 ] 00:16:21.567 }, 00:16:21.567 { 00:16:21.567 "subsystem": "bdev", 00:16:21.567 "config": [ 00:16:21.567 { 00:16:21.567 "method": "bdev_set_options", 00:16:21.567 "params": { 00:16:21.567 "bdev_io_pool_size": 65535, 00:16:21.567 "bdev_io_cache_size": 256, 00:16:21.567 "bdev_auto_examine": true, 00:16:21.567 "iobuf_small_cache_size": 128, 00:16:21.567 "iobuf_large_cache_size": 16 00:16:21.567 } 00:16:21.567 }, 00:16:21.567 { 00:16:21.567 "method": "bdev_raid_set_options", 00:16:21.567 "params": { 00:16:21.567 "process_window_size_kb": 1024, 00:16:21.567 "process_max_bandwidth_mb_sec": 0 00:16:21.568 } 00:16:21.568 }, 00:16:21.568 { 00:16:21.568 "method": "bdev_iscsi_set_options", 00:16:21.568 "params": { 00:16:21.568 "timeout_sec": 30 00:16:21.568 } 00:16:21.568 }, 00:16:21.568 { 00:16:21.568 "method": "bdev_nvme_set_options", 00:16:21.568 "params": { 00:16:21.568 "action_on_timeout": "none", 00:16:21.568 "timeout_us": 0, 00:16:21.568 "timeout_admin_us": 0, 00:16:21.568 "keep_alive_timeout_ms": 10000, 00:16:21.568 "arbitration_burst": 0, 00:16:21.568 "low_priority_weight": 0, 00:16:21.568 "medium_priority_weight": 0, 00:16:21.568 "high_priority_weight": 0, 00:16:21.568 "nvme_adminq_poll_period_us": 10000, 00:16:21.568 "nvme_ioq_poll_period_us": 0, 00:16:21.568 "io_queue_requests": 0, 00:16:21.568 "delay_cmd_submit": true, 00:16:21.568 "transport_retry_count": 4, 00:16:21.568 
"bdev_retry_count": 3, 00:16:21.568 "transport_ack_timeout": 0, 00:16:21.568 "ctrlr_loss_timeout_sec": 0, 00:16:21.568 "reconnect_delay_sec": 0, 00:16:21.568 "fast_io_fail_timeout_sec": 0, 00:16:21.568 "disable_auto_failback": false, 00:16:21.568 "generate_uuids": false, 00:16:21.568 "transport_tos": 0, 00:16:21.568 "nvme_error_stat": false, 00:16:21.568 "rdma_srq_size": 0, 00:16:21.568 "io_path_stat": false, 00:16:21.568 "allow_accel_sequence": false, 00:16:21.568 "rdma_max_cq_size": 0, 00:16:21.568 "rdma_cm_event_timeout_ms": 0, 00:16:21.568 "dhchap_digests": [ 00:16:21.568 "sha256", 00:16:21.568 "sha384", 00:16:21.568 "sha512" 00:16:21.568 ], 00:16:21.568 "dhchap_dhgroups": [ 00:16:21.568 "null", 00:16:21.568 "ffdhe2048", 00:16:21.568 "ffdhe3072", 00:16:21.568 "ffdhe4096", 00:16:21.568 "ffdhe6144", 00:16:21.568 "ffdhe8192" 00:16:21.568 ] 00:16:21.568 } 00:16:21.568 }, 00:16:21.568 { 00:16:21.568 "method": "bdev_nvme_set_hotplug", 00:16:21.568 "params": { 00:16:21.568 "period_us": 100000, 00:16:21.568 "enable": false 00:16:21.568 } 00:16:21.568 }, 00:16:21.568 { 00:16:21.568 "method": "bdev_malloc_create", 00:16:21.568 "params": { 00:16:21.568 "name": "malloc0", 00:16:21.568 "num_blocks": 8192, 00:16:21.568 "block_size": 4096, 00:16:21.568 "physical_block_size": 4096, 00:16:21.568 "uuid": "a540537c-de38-4b91-b7ef-55252256d0d8", 00:16:21.568 "optimal_io_boundary": 0, 00:16:21.568 "md_size": 0, 00:16:21.568 "dif_type": 0, 00:16:21.568 "dif_is_head_of_md": false, 00:16:21.568 "dif_pi_format": 0 00:16:21.568 } 00:16:21.568 }, 00:16:21.568 { 00:16:21.568 "method": "bdev_wait_for_examine" 00:16:21.568 } 00:16:21.568 ] 00:16:21.568 }, 00:16:21.568 { 00:16:21.568 "subsystem": "scsi", 00:16:21.568 "config": null 00:16:21.568 }, 00:16:21.568 { 00:16:21.568 "subsystem": "scheduler", 00:16:21.568 "config": [ 00:16:21.568 { 00:16:21.568 "method": "framework_set_scheduler", 00:16:21.568 "params": { 00:16:21.568 "name": "static" 00:16:21.568 } 00:16:21.568 } 00:16:21.568 ] 00:16:21.568 }, 00:16:21.568 { 00:16:21.568 "subsystem": "vhost_scsi", 00:16:21.568 "config": [] 00:16:21.568 }, 00:16:21.568 { 00:16:21.568 "subsystem": "vhost_blk", 00:16:21.568 "config": [] 00:16:21.568 }, 00:16:21.568 { 00:16:21.568 "subsystem": "ublk", 00:16:21.568 "config": [ 00:16:21.568 { 00:16:21.568 "method": "ublk_create_target", 00:16:21.568 "params": { 00:16:21.568 "cpumask": "1" 00:16:21.568 } 00:16:21.568 }, 00:16:21.568 { 00:16:21.568 "method": "ublk_start_disk", 00:16:21.568 "params": { 00:16:21.568 "bdev_name": "malloc0", 00:16:21.568 "ublk_id": 0, 00:16:21.568 "num_queues": 1, 00:16:21.568 "queue_depth": 128 00:16:21.568 } 00:16:21.568 } 00:16:21.568 ] 00:16:21.568 }, 00:16:21.568 { 00:16:21.568 "subsystem": "nbd", 00:16:21.568 "config": [] 00:16:21.568 }, 00:16:21.568 { 00:16:21.568 "subsystem": "nvmf", 00:16:21.568 "config": [ 00:16:21.568 { 00:16:21.568 "method": "nvmf_set_config", 00:16:21.568 "params": { 00:16:21.568 "discovery_filter": "match_any", 00:16:21.568 "admin_cmd_passthru": { 00:16:21.568 "identify_ctrlr": false 00:16:21.568 }, 00:16:21.568 "dhchap_digests": [ 00:16:21.568 "sha256", 00:16:21.568 "sha384", 00:16:21.568 "sha512" 00:16:21.568 ], 00:16:21.568 "dhchap_dhgroups": [ 00:16:21.568 "null", 00:16:21.568 "ffdhe2048", 00:16:21.568 "ffdhe3072", 00:16:21.568 "ffdhe4096", 00:16:21.568 "ffdhe6144", 00:16:21.568 "ffdhe8192" 00:16:21.568 ] 00:16:21.568 } 00:16:21.568 }, 00:16:21.568 { 00:16:21.568 "method": "nvmf_set_max_subsystems", 00:16:21.568 "params": { 00:16:21.568 "max_subsystems": 1024 
00:16:21.568 } 00:16:21.568 }, 00:16:21.568 { 00:16:21.568 "method": "nvmf_set_crdt", 00:16:21.568 "params": { 00:16:21.568 "crdt1": 0, 00:16:21.568 "crdt2": 0, 00:16:21.568 "crdt3": 0 00:16:21.568 } 00:16:21.568 } 00:16:21.568 ] 00:16:21.568 }, 00:16:21.568 { 00:16:21.568 "subsystem": "iscsi", 00:16:21.568 "config": [ 00:16:21.568 { 00:16:21.568 "method": "iscsi_set_options", 00:16:21.568 "params": { 00:16:21.568 "node_base": "iqn.2016-06.io.spdk", 00:16:21.568 "max_sessions": 128, 00:16:21.568 "max_connections_per_session": 2, 00:16:21.568 "max_queue_depth": 64, 00:16:21.568 "default_time2wait": 2, 00:16:21.568 "default_time2retain": 20, 00:16:21.568 "first_burst_length": 8192, 00:16:21.568 "immediate_data": true, 00:16:21.568 "allow_duplicated_isid": false, 00:16:21.568 "error_recovery_level": 0, 00:16:21.568 "nop_timeout": 60, 00:16:21.568 "nop_in_interval": 30, 00:16:21.568 "disable_chap": false, 00:16:21.568 "require_chap": false, 00:16:21.568 "mutual_chap": false, 00:16:21.568 "chap_group": 0, 00:16:21.568 "max_large_datain_per_connection": 64, 00:16:21.568 "max_r2t_per_connection": 4, 00:16:21.568 "pdu_pool_size": 36864, 00:16:21.568 "immediate_data_pool_size": 16384, 00:16:21.568 "data_out_pool_size": 2048 00:16:21.568 } 00:16:21.568 } 00:16:21.568 ] 00:16:21.568 } 00:16:21.568 ] 00:16:21.568 }' 00:16:21.568 21:48:44 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 84486 00:16:21.568 21:48:44 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 84486 ']' 00:16:21.568 21:48:44 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 84486 00:16:21.568 21:48:44 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:21.568 21:48:44 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:21.568 21:48:44 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84486 00:16:21.568 21:48:44 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:21.568 killing process with pid 84486 00:16:21.568 21:48:44 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:21.568 21:48:44 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84486' 00:16:21.568 21:48:44 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 84486 00:16:21.568 21:48:44 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 84486 00:16:21.829 [2024-11-27 21:48:44.894648] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:21.829 [2024-11-27 21:48:44.934385] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:21.829 [2024-11-27 21:48:44.934567] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:21.829 [2024-11-27 21:48:44.943384] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:21.829 [2024-11-27 21:48:44.943485] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:21.829 [2024-11-27 21:48:44.943495] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:21.829 [2024-11-27 21:48:44.943530] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:21.829 [2024-11-27 21:48:44.943690] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:22.773 21:48:45 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=84524 00:16:22.773 21:48:45 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 84524 00:16:22.773 21:48:45 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 84524 ']' 00:16:22.773 21:48:45 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.773 21:48:45 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:22.773 21:48:45 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:22.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.774 21:48:45 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.774 21:48:45 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:22.774 21:48:45 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:22.774 21:48:45 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:22.774 "subsystems": [ 00:16:22.774 { 00:16:22.774 "subsystem": "fsdev", 00:16:22.774 "config": [ 00:16:22.774 { 00:16:22.774 "method": "fsdev_set_opts", 00:16:22.774 "params": { 00:16:22.774 "fsdev_io_pool_size": 65535, 00:16:22.774 "fsdev_io_cache_size": 256 00:16:22.774 } 00:16:22.774 } 00:16:22.774 ] 00:16:22.774 }, 00:16:22.774 { 00:16:22.774 "subsystem": "keyring", 00:16:22.774 "config": [] 00:16:22.774 }, 00:16:22.774 { 00:16:22.774 "subsystem": "iobuf", 00:16:22.774 "config": [ 00:16:22.774 { 00:16:22.774 "method": "iobuf_set_options", 00:16:22.774 "params": { 00:16:22.774 "small_pool_count": 8192, 00:16:22.774 "large_pool_count": 1024, 00:16:22.774 "small_bufsize": 8192, 00:16:22.774 "large_bufsize": 135168, 00:16:22.774 "enable_numa": false 00:16:22.774 } 00:16:22.774 } 00:16:22.774 ] 00:16:22.774 }, 00:16:22.774 { 00:16:22.774 "subsystem": "sock", 00:16:22.774 "config": [ 00:16:22.774 { 00:16:22.774 "method": "sock_set_default_impl", 00:16:22.774 "params": { 00:16:22.774 "impl_name": "posix" 00:16:22.774 } 00:16:22.774 }, 00:16:22.774 { 00:16:22.774 "method": "sock_impl_set_options", 00:16:22.774 "params": { 00:16:22.774 "impl_name": "ssl", 00:16:22.774 "recv_buf_size": 4096, 00:16:22.774 "send_buf_size": 4096, 00:16:22.774 "enable_recv_pipe": true, 00:16:22.774 "enable_quickack": false, 00:16:22.774 "enable_placement_id": 0, 00:16:22.774 "enable_zerocopy_send_server": true, 00:16:22.774 "enable_zerocopy_send_client": false, 00:16:22.774 "zerocopy_threshold": 0, 00:16:22.774 "tls_version": 0, 00:16:22.774 "enable_ktls": false 00:16:22.774 } 00:16:22.774 }, 00:16:22.774 { 00:16:22.774 "method": "sock_impl_set_options", 00:16:22.774 "params": { 00:16:22.774 "impl_name": "posix", 00:16:22.774 "recv_buf_size": 2097152, 00:16:22.774 "send_buf_size": 2097152, 00:16:22.774 "enable_recv_pipe": true, 00:16:22.774 "enable_quickack": false, 00:16:22.774 "enable_placement_id": 0, 00:16:22.774 "enable_zerocopy_send_server": true, 00:16:22.774 "enable_zerocopy_send_client": false, 00:16:22.774 "zerocopy_threshold": 0, 00:16:22.774 "tls_version": 0, 00:16:22.774 "enable_ktls": false 00:16:22.774 } 00:16:22.774 } 00:16:22.774 ] 00:16:22.774 }, 00:16:22.774 { 00:16:22.774 "subsystem": "vmd", 00:16:22.774 "config": [] 00:16:22.774 }, 00:16:22.774 { 00:16:22.774 "subsystem": "accel", 00:16:22.774 "config": [ 00:16:22.774 { 00:16:22.774 "method": "accel_set_options", 00:16:22.774 "params": { 00:16:22.774 "small_cache_size": 128, 
00:16:22.774 "large_cache_size": 16, 00:16:22.774 "task_count": 2048, 00:16:22.774 "sequence_count": 2048, 00:16:22.774 "buf_count": 2048 00:16:22.774 } 00:16:22.774 } 00:16:22.774 ] 00:16:22.774 }, 00:16:22.774 { 00:16:22.774 "subsystem": "bdev", 00:16:22.774 "config": [ 00:16:22.774 { 00:16:22.774 "method": "bdev_set_options", 00:16:22.774 "params": { 00:16:22.774 "bdev_io_pool_size": 65535, 00:16:22.774 "bdev_io_cache_size": 256, 00:16:22.774 "bdev_auto_examine": true, 00:16:22.774 "iobuf_small_cache_size": 128, 00:16:22.774 "iobuf_large_cache_size": 16 00:16:22.774 } 00:16:22.774 }, 00:16:22.774 { 00:16:22.774 "method": "bdev_raid_set_options", 00:16:22.774 "params": { 00:16:22.774 "process_window_size_kb": 1024, 00:16:22.774 "process_max_bandwidth_mb_sec": 0 00:16:22.774 } 00:16:22.774 }, 00:16:22.774 { 00:16:22.774 "method": "bdev_iscsi_set_options", 00:16:22.774 "params": { 00:16:22.774 "timeout_sec": 30 00:16:22.774 } 00:16:22.774 }, 00:16:22.774 { 00:16:22.774 "method": "bdev_nvme_set_options", 00:16:22.774 "params": { 00:16:22.774 "action_on_timeout": "none", 00:16:22.774 "timeout_us": 0, 00:16:22.774 "timeout_admin_us": 0, 00:16:22.774 "keep_alive_timeout_ms": 10000, 00:16:22.774 "arbitration_burst": 0, 00:16:22.774 "low_priority_weight": 0, 00:16:22.774 "medium_priority_weight": 0, 00:16:22.774 "high_priority_weight": 0, 00:16:22.774 "nvme_adminq_poll_period_us": 10000, 00:16:22.774 "nvme_ioq_poll_period_us": 0, 00:16:22.774 "io_queue_requests": 0, 00:16:22.774 "delay_cmd_submit": true, 00:16:22.774 "transport_retry_count": 4, 00:16:22.774 "bdev_retry_count": 3, 00:16:22.774 "transport_ack_timeout": 0, 00:16:22.774 "ctrlr_loss_timeout_sec": 0, 00:16:22.774 "reconnect_delay_sec": 0, 00:16:22.774 "fast_io_fail_timeout_sec": 0, 00:16:22.774 "disable_auto_failback": false, 00:16:22.774 "generate_uuids": false, 00:16:22.774 "transport_tos": 0, 00:16:22.774 "nvme_error_stat": false, 00:16:22.774 "rdma_srq_size": 0, 00:16:22.774 "io_path_stat": false, 00:16:22.774 "allow_accel_sequence": false, 00:16:22.774 "rdma_max_cq_size": 0, 00:16:22.774 "rdma_cm_event_timeout_ms": 0, 00:16:22.774 "dhchap_digests": [ 00:16:22.774 "sha256", 00:16:22.774 "sha384", 00:16:22.774 "sha512" 00:16:22.774 ], 00:16:22.774 "dhchap_dhgroups": [ 00:16:22.774 "null", 00:16:22.774 "ffdhe2048", 00:16:22.774 "ffdhe3072", 00:16:22.774 "ffdhe4096", 00:16:22.774 "ffdhe6144", 00:16:22.774 "ffdhe8192" 00:16:22.774 ] 00:16:22.774 } 00:16:22.774 }, 00:16:22.774 { 00:16:22.774 "method": "bdev_nvme_set_hotplug", 00:16:22.774 "params": { 00:16:22.774 "period_us": 100000, 00:16:22.774 "enable": false 00:16:22.774 } 00:16:22.774 }, 00:16:22.774 { 00:16:22.774 "method": "bdev_malloc_create", 00:16:22.774 "params": { 00:16:22.774 "name": "malloc0", 00:16:22.774 "num_blocks": 8192, 00:16:22.774 "block_size": 4096, 00:16:22.774 "physical_block_size": 4096, 00:16:22.774 "uuid": "a540537c-de38-4b91-b7ef-55252256d0d8", 00:16:22.774 "optimal_io_boundary": 0, 00:16:22.774 "md_size": 0, 00:16:22.774 "dif_type": 0, 00:16:22.774 "dif_is_head_of_md": false, 00:16:22.774 "dif_pi_format": 0 00:16:22.774 } 00:16:22.774 }, 00:16:22.774 { 00:16:22.774 "method": "bdev_wait_for_examine" 00:16:22.774 } 00:16:22.774 ] 00:16:22.774 }, 00:16:22.774 { 00:16:22.774 "subsystem": "scsi", 00:16:22.774 "config": null 00:16:22.774 }, 00:16:22.774 { 00:16:22.774 "subsystem": "scheduler", 00:16:22.774 "config": [ 00:16:22.774 { 00:16:22.774 "method": "framework_set_scheduler", 00:16:22.774 "params": { 00:16:22.774 "name": "static" 00:16:22.774 } 
00:16:22.774 } 00:16:22.774 ] 00:16:22.774 }, 00:16:22.774 { 00:16:22.774 "subsystem": "vhost_scsi", 00:16:22.774 "config": [] 00:16:22.774 }, 00:16:22.774 { 00:16:22.774 "subsystem": "vhost_blk", 00:16:22.774 "config": [] 00:16:22.774 }, 00:16:22.774 { 00:16:22.774 "subsystem": "ublk", 00:16:22.774 "config": [ 00:16:22.774 { 00:16:22.774 "method": "ublk_create_target", 00:16:22.774 "params": { 00:16:22.774 "cpumask": "1" 00:16:22.774 } 00:16:22.774 }, 00:16:22.774 { 00:16:22.774 "method": "ublk_start_disk", 00:16:22.774 "params": { 00:16:22.774 "bdev_name": "malloc0", 00:16:22.774 "ublk_id": 0, 00:16:22.774 "num_queues": 1, 00:16:22.774 "queue_depth": 128 00:16:22.774 } 00:16:22.774 } 00:16:22.774 ] 00:16:22.774 }, 00:16:22.774 { 00:16:22.774 "subsystem": "nbd", 00:16:22.774 "config": [] 00:16:22.774 }, 00:16:22.774 { 00:16:22.774 "subsystem": "nvmf", 00:16:22.774 "config": [ 00:16:22.774 { 00:16:22.774 "method": "nvmf_set_config", 00:16:22.774 "params": { 00:16:22.774 "discovery_filter": "match_any", 00:16:22.774 "admin_cmd_passthru": { 00:16:22.774 "identify_ctrlr": false 00:16:22.774 }, 00:16:22.774 "dhchap_digests": [ 00:16:22.774 "sha256", 00:16:22.774 "sha384", 00:16:22.774 "sha512" 00:16:22.774 ], 00:16:22.774 "dhchap_dhgroups": [ 00:16:22.774 "null", 00:16:22.774 "ffdhe2048", 00:16:22.774 "ffdhe3072", 00:16:22.774 "ffdhe4096", 00:16:22.774 "ffdhe6144", 00:16:22.774 "ffdhe8192" 00:16:22.774 ] 00:16:22.774 } 00:16:22.774 }, 00:16:22.774 { 00:16:22.774 "method": "nvmf_set_max_subsystems", 00:16:22.774 "params": { 00:16:22.775 "max_subsystems": 1024 00:16:22.775 } 00:16:22.775 }, 00:16:22.775 { 00:16:22.775 "method": "nvmf_set_crdt", 00:16:22.775 "params": { 00:16:22.775 "crdt1": 0, 00:16:22.775 "crdt2": 0, 00:16:22.775 "crdt3": 0 00:16:22.775 } 00:16:22.775 } 00:16:22.775 ] 00:16:22.775 }, 00:16:22.775 { 00:16:22.775 "subsystem": "iscsi", 00:16:22.775 "config": [ 00:16:22.775 { 00:16:22.775 "method": "iscsi_set_options", 00:16:22.775 "params": { 00:16:22.775 "node_base": "iqn.2016-06.io.spdk", 00:16:22.775 "max_sessions": 128, 00:16:22.775 "max_connections_per_session": 2, 00:16:22.775 "max_queue_depth": 64, 00:16:22.775 "default_time2wait": 2, 00:16:22.775 "default_time2retain": 20, 00:16:22.775 "first_burst_length": 8192, 00:16:22.775 "immediate_data": true, 00:16:22.775 "allow_duplicated_isid": false, 00:16:22.775 "error_recovery_level": 0, 00:16:22.775 "nop_timeout": 60, 00:16:22.775 "nop_in_interval": 30, 00:16:22.775 "disable_chap": false, 00:16:22.775 "require_chap": false, 00:16:22.775 "mutual_chap": false, 00:16:22.775 "chap_group": 0, 00:16:22.775 "max_large_datain_per_connection": 64, 00:16:22.775 "max_r2t_per_connection": 4, 00:16:22.775 "pdu_pool_size": 36864, 00:16:22.775 "immediate_data_pool_size": 16384, 00:16:22.775 "data_out_pool_size": 2048 00:16:22.775 } 00:16:22.775 } 00:16:22.775 ] 00:16:22.775 } 00:16:22.775 ] 00:16:22.775 }' 00:16:22.775 [2024-11-27 21:48:45.626890] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
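The round-trip that test_save_ublk_config exercises, dumping the live configuration and feeding it back into a fresh target via -c /dev/fd/63, can be reproduced by hand roughly as follows (a sketch run from the SPDK repository root; the temporary file name is illustrative, not taken from the test):

    # Dump the running target's configuration, including the ublk target and disk:
    ./scripts/rpc.py save_config > /tmp/ublk_config.json
    # Start a fresh target from that dump; the ublk device is recreated at startup:
    ./build/bin/spdk_tgt -L ublk -c /tmp/ublk_config.json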
00:16:22.775 [2024-11-27 21:48:45.627027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84524 ] 00:16:22.775 [2024-11-27 21:48:45.768687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.775 [2024-11-27 21:48:45.801593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.036 [2024-11-27 21:48:46.151353] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:23.036 [2024-11-27 21:48:46.151599] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:23.296 [2024-11-27 21:48:46.159468] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:23.296 [2024-11-27 21:48:46.159521] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:23.296 [2024-11-27 21:48:46.159527] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:23.296 [2024-11-27 21:48:46.159535] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:23.296 [2024-11-27 21:48:46.168439] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:23.296 [2024-11-27 21:48:46.168459] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:23.296 [2024-11-27 21:48:46.175359] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:23.296 [2024-11-27 21:48:46.175435] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:23.296 [2024-11-27 21:48:46.192354] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:23.556 21:48:46 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:23.556 21:48:46 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:23.556 21:48:46 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:23.556 21:48:46 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.556 21:48:46 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:23.556 21:48:46 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:23.556 21:48:46 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.556 21:48:46 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:23.556 21:48:46 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:23.556 21:48:46 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 84524 00:16:23.556 21:48:46 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 84524 ']' 00:16:23.556 21:48:46 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 84524 00:16:23.556 21:48:46 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:23.556 21:48:46 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:23.556 21:48:46 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84524 00:16:23.556 21:48:46 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:23.556 21:48:46 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:23.556 killing process with pid 84524 00:16:23.556 21:48:46 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84524' 00:16:23.556 21:48:46 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 84524 00:16:23.556 21:48:46 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 84524 00:16:23.817 [2024-11-27 21:48:46.750992] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:23.817 [2024-11-27 21:48:46.783364] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:23.817 [2024-11-27 21:48:46.783470] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:23.817 [2024-11-27 21:48:46.793374] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:23.817 [2024-11-27 21:48:46.793417] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:23.817 [2024-11-27 21:48:46.793428] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:23.817 [2024-11-27 21:48:46.793449] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:23.817 [2024-11-27 21:48:46.793558] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:24.078 21:48:47 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:24.078 00:16:24.078 real 0m3.978s 00:16:24.078 user 0m2.615s 00:16:24.078 sys 0m2.005s 00:16:24.078 21:48:47 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:24.079 ************************************ 00:16:24.079 END TEST test_save_ublk_config 00:16:24.079 ************************************ 00:16:24.079 21:48:47 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:24.079 21:48:47 ublk -- ublk/ublk.sh@139 -- # spdk_pid=84578 00:16:24.079 21:48:47 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:24.079 21:48:47 ublk -- ublk/ublk.sh@141 -- # waitforlisten 84578 00:16:24.079 21:48:47 ublk -- common/autotest_common.sh@835 -- # '[' -z 84578 ']' 00:16:24.079 21:48:47 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.079 21:48:47 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:24.079 21:48:47 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.079 21:48:47 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:24.079 21:48:47 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:24.079 21:48:47 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:24.340 [2024-11-27 21:48:47.270676] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:16:24.340 [2024-11-27 21:48:47.270792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84578 ] 00:16:24.340 [2024-11-27 21:48:47.412143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:24.340 [2024-11-27 21:48:47.440788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.340 [2024-11-27 21:48:47.440865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.284 21:48:48 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:25.284 21:48:48 ublk -- common/autotest_common.sh@868 -- # return 0 00:16:25.284 21:48:48 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:25.284 21:48:48 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:25.284 21:48:48 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:25.284 21:48:48 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:25.284 ************************************ 00:16:25.284 START TEST test_create_ublk 00:16:25.284 ************************************ 00:16:25.284 21:48:48 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:16:25.284 21:48:48 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:25.284 21:48:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.284 21:48:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:25.284 [2024-11-27 21:48:48.124354] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:25.284 [2024-11-27 21:48:48.125600] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:25.284 21:48:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.284 21:48:48 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:25.284 21:48:48 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:25.284 21:48:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.284 21:48:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:25.284 21:48:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.284 21:48:48 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:25.284 21:48:48 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:25.284 21:48:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.284 21:48:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:25.284 [2024-11-27 21:48:48.202454] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:25.284 [2024-11-27 21:48:48.202782] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:25.284 [2024-11-27 21:48:48.202791] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:25.284 [2024-11-27 21:48:48.202798] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:25.284 [2024-11-27 21:48:48.210369] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:25.284 [2024-11-27 21:48:48.210393] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:25.284 
[2024-11-27 21:48:48.218371] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:25.284 [2024-11-27 21:48:48.218891] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:25.284 [2024-11-27 21:48:48.240362] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:25.284 21:48:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.284 21:48:48 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:25.284 21:48:48 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:25.284 21:48:48 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:25.284 21:48:48 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.284 21:48:48 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:25.284 21:48:48 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.284 21:48:48 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:25.284 { 00:16:25.284 "ublk_device": "/dev/ublkb0", 00:16:25.284 "id": 0, 00:16:25.284 "queue_depth": 512, 00:16:25.284 "num_queues": 4, 00:16:25.284 "bdev_name": "Malloc0" 00:16:25.284 } 00:16:25.284 ]' 00:16:25.284 21:48:48 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:25.284 21:48:48 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:25.284 21:48:48 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:25.284 21:48:48 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:25.285 21:48:48 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:25.285 21:48:48 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:25.285 21:48:48 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:25.285 21:48:48 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:25.285 21:48:48 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:25.545 21:48:48 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:25.545 21:48:48 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:25.545 21:48:48 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:25.545 21:48:48 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:25.545 21:48:48 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:25.545 21:48:48 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:25.545 21:48:48 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:25.545 21:48:48 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:25.545 21:48:48 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:25.545 21:48:48 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:25.545 21:48:48 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:25.546 21:48:48 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
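The fio_template assembled above is a single long command line; broken onto one option per line it is easier to read. This is purely a readability restatement of the command the test runs next, not an additional invocation.
# 128 MiB region of /dev/ublkb0, sequential O_DIRECT writes, 10 s time-based run,
# 0xcc verify pattern (the verify read phase never starts because the write phase
# uses all of the runtime, as the fio output below also notes).
fio --name=fio_test --filename=/dev/ublkb0 \
    --offset=0 --size=134217728 \
    --rw=write --direct=1 \
    --time_based --runtime=10 \
    --do_verify=1 --verify=pattern --verify_pattern=0xcc \
    --verify_state_save=0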
00:16:25.546 21:48:48 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:25.546 fio: verification read phase will never start because write phase uses all of runtime 00:16:25.546 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:25.546 fio-3.35 00:16:25.546 Starting 1 process 00:16:37.753 00:16:37.753 fio_test: (groupid=0, jobs=1): err= 0: pid=84620: Wed Nov 27 21:48:58 2024 00:16:37.753 write: IOPS=16.8k, BW=65.5MiB/s (68.7MB/s)(655MiB/10001msec); 0 zone resets 00:16:37.753 clat (usec): min=32, max=4383, avg=58.85, stdev=83.39 00:16:37.753 lat (usec): min=32, max=4383, avg=59.27, stdev=83.40 00:16:37.753 clat percentiles (usec): 00:16:37.753 | 1.00th=[ 34], 5.00th=[ 39], 10.00th=[ 49], 20.00th=[ 52], 00:16:37.753 | 30.00th=[ 54], 40.00th=[ 55], 50.00th=[ 57], 60.00th=[ 58], 00:16:37.753 | 70.00th=[ 59], 80.00th=[ 61], 90.00th=[ 65], 95.00th=[ 69], 00:16:37.753 | 99.00th=[ 79], 99.50th=[ 85], 99.90th=[ 1303], 99.95th=[ 2442], 00:16:37.753 | 99.99th=[ 3621] 00:16:37.753 bw ( KiB/s): min=64712, max=79440, per=100.00%, avg=67216.32, stdev=4006.11, samples=19 00:16:37.753 iops : min=16178, max=19860, avg=16804.05, stdev=1001.54, samples=19 00:16:37.753 lat (usec) : 50=12.00%, 100=87.70%, 250=0.15%, 500=0.02%, 750=0.01% 00:16:37.753 lat (usec) : 1000=0.01% 00:16:37.753 lat (msec) : 2=0.05%, 4=0.06%, 10=0.01% 00:16:37.753 cpu : usr=2.51%, sys=12.55%, ctx=167806, majf=0, minf=794 00:16:37.753 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:37.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.753 issued rwts: total=0,167804,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:37.753 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:37.753 00:16:37.753 Run status group 0 (all jobs): 00:16:37.753 WRITE: bw=65.5MiB/s (68.7MB/s), 65.5MiB/s-65.5MiB/s (68.7MB/s-68.7MB/s), io=655MiB (687MB), run=10001-10001msec 00:16:37.753 00:16:37.753 Disk stats (read/write): 00:16:37.753 ublkb0: ios=0/166098, merge=0/0, ticks=0/8480, in_queue=8480, util=99.06% 00:16:37.753 21:48:58 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:37.753 21:48:58 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.753 21:48:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.753 [2024-11-27 21:48:58.667782] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:37.753 [2024-11-27 21:48:58.706389] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:37.753 [2024-11-27 21:48:58.707057] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:37.753 [2024-11-27 21:48:58.714364] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:37.753 [2024-11-27 21:48:58.714623] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:37.753 [2024-11-27 21:48:58.714641] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:37.753 21:48:58 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.753 21:48:58 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:16:37.753 21:48:58 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:16:37.753 21:48:58 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:37.753 21:48:58 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:37.753 21:48:58 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:37.753 21:48:58 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:37.753 21:48:58 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:37.753 21:48:58 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:16:37.753 21:48:58 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.753 21:48:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.753 [2024-11-27 21:48:58.730437] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:37.753 request: 00:16:37.753 { 00:16:37.753 "ublk_id": 0, 00:16:37.753 "method": "ublk_stop_disk", 00:16:37.753 "req_id": 1 00:16:37.753 } 00:16:37.753 Got JSON-RPC error response 00:16:37.753 response: 00:16:37.753 { 00:16:37.753 "code": -19, 00:16:37.753 "message": "No such device" 00:16:37.753 } 00:16:37.753 21:48:58 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:37.753 21:48:58 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:16:37.753 21:48:58 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:37.753 21:48:58 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:37.753 21:48:58 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:37.754 21:48:58 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:37.754 21:48:58 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.754 21:48:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.754 [2024-11-27 21:48:58.746425] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:37.754 [2024-11-27 21:48:58.748181] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:37.754 [2024-11-27 21:48:58.748210] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:37.754 21:48:58 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.754 21:48:58 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:37.754 21:48:58 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.754 21:48:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.754 21:48:58 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.754 21:48:58 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:37.754 21:48:58 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:37.754 21:48:58 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.754 21:48:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.754 21:48:58 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.754 21:48:58 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:37.754 21:48:58 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:37.754 21:48:58 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:37.754 21:48:58 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:37.754 21:48:58 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.754 21:48:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.754 21:48:58 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.754 21:48:58 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:37.754 21:48:58 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:37.754 21:48:58 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:37.754 00:16:37.754 real 0m10.811s 00:16:37.754 user 0m0.570s 00:16:37.754 sys 0m1.328s 00:16:37.754 21:48:58 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:37.754 21:48:58 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.754 ************************************ 00:16:37.754 END TEST test_create_ublk 00:16:37.754 ************************************ 00:16:37.754 21:48:58 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:37.754 21:48:58 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:37.754 21:48:58 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:37.754 21:48:58 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.754 ************************************ 00:16:37.754 START TEST test_create_multi_ublk 00:16:37.754 ************************************ 00:16:37.754 21:48:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:16:37.754 21:48:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:37.754 21:48:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.754 21:48:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.754 [2024-11-27 21:48:58.973357] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:37.754 [2024-11-27 21:48:58.974486] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:37.754 21:48:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.754 21:48:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:37.754 21:48:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:37.754 21:48:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:37.754 21:48:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:37.754 21:48:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.754 21:48:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.754 [2024-11-27 21:48:59.057491] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:16:37.754 [2024-11-27 21:48:59.057810] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:37.754 [2024-11-27 21:48:59.057824] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:37.754 [2024-11-27 21:48:59.057838] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:37.754 [2024-11-27 21:48:59.077362] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:37.754 [2024-11-27 21:48:59.077381] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:37.754 [2024-11-27 21:48:59.089361] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:37.754 [2024-11-27 21:48:59.089885] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:37.754 [2024-11-27 21:48:59.116365] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.754 [2024-11-27 21:48:59.224454] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:37.754 [2024-11-27 21:48:59.224765] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:37.754 [2024-11-27 21:48:59.224775] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:37.754 [2024-11-27 21:48:59.224782] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:37.754 [2024-11-27 21:48:59.236368] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:37.754 [2024-11-27 21:48:59.236386] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:37.754 [2024-11-27 21:48:59.248355] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:37.754 [2024-11-27 21:48:59.248867] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:37.754 [2024-11-27 21:48:59.273360] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:37.754 
21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.754 [2024-11-27 21:48:59.380454] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:37.754 [2024-11-27 21:48:59.380766] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:37.754 [2024-11-27 21:48:59.380779] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:37.754 [2024-11-27 21:48:59.380785] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:37.754 [2024-11-27 21:48:59.392370] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:37.754 [2024-11-27 21:48:59.392385] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:37.754 [2024-11-27 21:48:59.404366] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:37.754 [2024-11-27 21:48:59.404878] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:37.754 [2024-11-27 21:48:59.429369] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.754 21:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.754 [2024-11-27 21:48:59.536453] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:37.754 [2024-11-27 21:48:59.536777] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:37.754 [2024-11-27 21:48:59.536789] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:37.754 [2024-11-27 21:48:59.536796] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:37.754 
[2024-11-27 21:48:59.549550] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:37.754 [2024-11-27 21:48:59.549571] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:37.755 [2024-11-27 21:48:59.560365] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:37.755 [2024-11-27 21:48:59.560875] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:37.755 [2024-11-27 21:48:59.573376] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:37.755 { 00:16:37.755 "ublk_device": "/dev/ublkb0", 00:16:37.755 "id": 0, 00:16:37.755 "queue_depth": 512, 00:16:37.755 "num_queues": 4, 00:16:37.755 "bdev_name": "Malloc0" 00:16:37.755 }, 00:16:37.755 { 00:16:37.755 "ublk_device": "/dev/ublkb1", 00:16:37.755 "id": 1, 00:16:37.755 "queue_depth": 512, 00:16:37.755 "num_queues": 4, 00:16:37.755 "bdev_name": "Malloc1" 00:16:37.755 }, 00:16:37.755 { 00:16:37.755 "ublk_device": "/dev/ublkb2", 00:16:37.755 "id": 2, 00:16:37.755 "queue_depth": 512, 00:16:37.755 "num_queues": 4, 00:16:37.755 "bdev_name": "Malloc2" 00:16:37.755 }, 00:16:37.755 { 00:16:37.755 "ublk_device": "/dev/ublkb3", 00:16:37.755 "id": 3, 00:16:37.755 "queue_depth": 512, 00:16:37.755 "num_queues": 4, 00:16:37.755 "bdev_name": "Malloc3" 00:16:37.755 } 00:16:37.755 ]' 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:37.755 21:48:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.755 [2024-11-27 21:49:00.235457] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:37.755 [2024-11-27 21:49:00.277932] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:37.755 [2024-11-27 21:49:00.279064] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:37.755 [2024-11-27 21:49:00.283361] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:37.755 [2024-11-27 21:49:00.283600] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:37.755 [2024-11-27 21:49:00.283611] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.755 [2024-11-27 21:49:00.298460] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:37.755 [2024-11-27 21:49:00.328901] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:37.755 [2024-11-27 21:49:00.329920] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:37.755 [2024-11-27 21:49:00.339363] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:37.755 [2024-11-27 21:49:00.339599] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:37.755 [2024-11-27 21:49:00.339609] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.755 [2024-11-27 21:49:00.355423] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:37.755 [2024-11-27 21:49:00.385920] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:37.755 [2024-11-27 21:49:00.386893] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:37.755 [2024-11-27 21:49:00.391375] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:37.755 [2024-11-27 21:49:00.391607] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:37.755 [2024-11-27 21:49:00.391618] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:16:37.755 [2024-11-27 21:49:00.407425] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:37.755 [2024-11-27 21:49:00.436895] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:37.755 [2024-11-27 21:49:00.437804] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:37.755 [2024-11-27 21:49:00.447359] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:37.755 [2024-11-27 21:49:00.447594] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:37.755 [2024-11-27 21:49:00.447604] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:37.755 [2024-11-27 21:49:00.639415] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:37.755 [2024-11-27 21:49:00.640594] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:37.755 [2024-11-27 21:49:00.640622] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:37.755 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:37.756 21:49:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.756 21:49:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.756 21:49:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.756 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:37.756 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:37.756 21:49:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.756 21:49:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.013 21:49:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.013 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:38.013 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:38.013 21:49:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.013 21:49:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.013 21:49:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.013 21:49:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:38.013 21:49:00 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:38.013 21:49:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.013 21:49:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.013 21:49:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.013 21:49:00 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:38.013 21:49:00 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:38.013 21:49:01 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:38.013 21:49:01 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:38.013 21:49:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.013 21:49:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.013 21:49:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.013 21:49:01 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:38.013 21:49:01 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:38.013 21:49:01 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:38.013 00:16:38.013 real 0m2.098s 00:16:38.013 user 0m0.799s 00:16:38.013 sys 0m0.133s 00:16:38.013 21:49:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:38.013 21:49:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.014 ************************************ 00:16:38.014 END TEST test_create_multi_ublk 00:16:38.014 ************************************ 00:16:38.014 21:49:01 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:38.014 21:49:01 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:38.014 21:49:01 ublk -- ublk/ublk.sh@130 -- # killprocess 84578 00:16:38.014 21:49:01 ublk -- common/autotest_common.sh@954 -- # '[' -z 84578 ']' 00:16:38.014 21:49:01 ublk -- common/autotest_common.sh@958 -- # kill -0 84578 00:16:38.014 21:49:01 ublk -- common/autotest_common.sh@959 -- # uname 00:16:38.014 21:49:01 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:38.014 21:49:01 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84578 00:16:38.014 21:49:01 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:38.014 21:49:01 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:38.014 killing process with pid 84578 00:16:38.014 21:49:01 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84578' 00:16:38.014 21:49:01 ublk -- common/autotest_common.sh@973 -- # kill 84578 00:16:38.014 21:49:01 ublk -- common/autotest_common.sh@978 -- # wait 84578 00:16:38.271 [2024-11-27 21:49:01.340287] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:38.271 [2024-11-27 21:49:01.340362] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:38.532 00:16:38.532 real 0m18.573s 00:16:38.532 user 0m28.099s 00:16:38.532 sys 0m8.187s 00:16:38.532 21:49:01 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:38.532 21:49:01 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:38.532 ************************************ 00:16:38.532 END TEST ublk 00:16:38.532 ************************************ 00:16:38.532 21:49:01 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:38.532 
21:49:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:38.532 21:49:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:38.532 21:49:01 -- common/autotest_common.sh@10 -- # set +x 00:16:38.532 ************************************ 00:16:38.532 START TEST ublk_recovery 00:16:38.532 ************************************ 00:16:38.532 21:49:01 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:38.793 * Looking for test storage... 00:16:38.793 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:38.793 21:49:01 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:38.793 21:49:01 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:38.793 21:49:01 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:16:38.793 21:49:01 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:38.793 21:49:01 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:38.793 21:49:01 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:38.793 21:49:01 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:38.793 21:49:01 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:38.793 21:49:01 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:38.793 21:49:01 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:38.793 21:49:01 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:38.793 21:49:01 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:38.793 21:49:01 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:38.793 21:49:01 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:38.793 21:49:01 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:38.793 21:49:01 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:16:38.793 21:49:01 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:16:38.793 21:49:01 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:38.793 21:49:01 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:38.793 21:49:01 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:16:38.793 21:49:01 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:16:38.793 21:49:01 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:38.793 21:49:01 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:16:38.793 21:49:01 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:38.793 21:49:01 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:16:38.793 21:49:01 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:16:38.793 21:49:01 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:38.793 21:49:01 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:16:38.793 21:49:01 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:38.793 21:49:01 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:38.793 21:49:01 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:38.793 21:49:01 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:16:38.794 21:49:01 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:38.794 21:49:01 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:38.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.794 --rc genhtml_branch_coverage=1 00:16:38.794 --rc genhtml_function_coverage=1 00:16:38.794 --rc genhtml_legend=1 00:16:38.794 --rc geninfo_all_blocks=1 00:16:38.794 --rc geninfo_unexecuted_blocks=1 00:16:38.794 00:16:38.794 ' 00:16:38.794 21:49:01 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:38.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.794 --rc genhtml_branch_coverage=1 00:16:38.794 --rc genhtml_function_coverage=1 00:16:38.794 --rc genhtml_legend=1 00:16:38.794 --rc geninfo_all_blocks=1 00:16:38.794 --rc geninfo_unexecuted_blocks=1 00:16:38.794 00:16:38.794 ' 00:16:38.794 21:49:01 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:38.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.794 --rc genhtml_branch_coverage=1 00:16:38.794 --rc genhtml_function_coverage=1 00:16:38.794 --rc genhtml_legend=1 00:16:38.794 --rc geninfo_all_blocks=1 00:16:38.794 --rc geninfo_unexecuted_blocks=1 00:16:38.794 00:16:38.794 ' 00:16:38.794 21:49:01 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:38.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.794 --rc genhtml_branch_coverage=1 00:16:38.794 --rc genhtml_function_coverage=1 00:16:38.794 --rc genhtml_legend=1 00:16:38.794 --rc geninfo_all_blocks=1 00:16:38.794 --rc geninfo_unexecuted_blocks=1 00:16:38.794 00:16:38.794 ' 00:16:38.794 21:49:01 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:38.794 21:49:01 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:38.794 21:49:01 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:38.794 21:49:01 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:38.794 21:49:01 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:38.794 21:49:01 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:38.794 21:49:01 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:38.794 21:49:01 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:38.794 21:49:01 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:16:38.794 21:49:01 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:38.794 21:49:01 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=84948 00:16:38.794 21:49:01 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:38.794 21:49:01 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 84948 00:16:38.794 21:49:01 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 84948 ']' 00:16:38.794 21:49:01 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.794 21:49:01 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:38.794 21:49:01 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:38.794 21:49:01 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.794 21:49:01 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:38.794 21:49:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:38.794 [2024-11-27 21:49:01.843619] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:16:38.794 [2024-11-27 21:49:01.843730] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84948 ] 00:16:39.054 [2024-11-27 21:49:01.980441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:39.054 [2024-11-27 21:49:02.006998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.054 [2024-11-27 21:49:02.007038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.626 21:49:02 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:39.626 21:49:02 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:39.626 21:49:02 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:39.626 21:49:02 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.626 21:49:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.626 [2024-11-27 21:49:02.681356] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:39.626 [2024-11-27 21:49:02.682604] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:39.626 21:49:02 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.626 21:49:02 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:39.626 21:49:02 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.626 21:49:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.626 malloc0 00:16:39.626 21:49:02 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.626 21:49:02 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:39.626 21:49:02 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.626 21:49:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.626 [2024-11-27 21:49:02.721464] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:16:39.626 [2024-11-27 21:49:02.721553] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:39.626 [2024-11-27 21:49:02.721560] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:39.626 [2024-11-27 21:49:02.721568] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:39.626 [2024-11-27 21:49:02.730460] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:39.626 [2024-11-27 21:49:02.730482] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:39.626 [2024-11-27 21:49:02.737357] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:39.626 [2024-11-27 21:49:02.737482] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:39.890 [2024-11-27 21:49:02.752362] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:39.890 1 00:16:39.890 21:49:02 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.890 21:49:02 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:40.908 21:49:03 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=84981 00:16:40.908 21:49:03 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:40.908 21:49:03 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:40.908 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:40.908 fio-3.35 00:16:40.908 Starting 1 process 00:16:46.186 21:49:08 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 84948 00:16:46.186 21:49:08 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:51.477 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 84948 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:51.477 21:49:13 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=85092 00:16:51.477 21:49:13 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:51.477 21:49:13 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:51.477 21:49:13 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 85092 00:16:51.477 21:49:13 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 85092 ']' 00:16:51.477 21:49:13 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.477 21:49:13 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.477 21:49:13 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.477 21:49:13 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.477 21:49:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:51.477 [2024-11-27 21:49:13.859038] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
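The recovery test that continues below relies on the kernel keeping /dev/ublkb1 alive while the SPDK target behind it is killed and replaced. A rough sketch of the driving sequence, pieced together from the ublk_recovery.sh trace above and below (the pid variable is a placeholder; the fio job started against /dev/ublkb1 keeps running throughout):
# Original target: create the device that fio will exercise.
scripts/rpc.py ublk_create_target
scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128
# Kill the target hard while I/O is in flight, then start a replacement.
kill -9 "$spdk_pid"        # placeholder for the stored spdk_tgt pid
build/bin/spdk_tgt -m 0x3 -L ublk &
# New target: recreate the same bdev and reattach the existing /dev/ublkb1.
scripts/rpc.py ublk_create_target
scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
scripts/rpc.py ublk_recover_disk malloc0 1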
00:16:51.477 [2024-11-27 21:49:13.859198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85092 ] 00:16:51.477 [2024-11-27 21:49:14.006662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:51.477 [2024-11-27 21:49:14.039029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.477 [2024-11-27 21:49:14.039083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.736 21:49:14 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:51.736 21:49:14 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:51.736 21:49:14 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:51.736 21:49:14 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.736 21:49:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:51.736 [2024-11-27 21:49:14.704354] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:51.736 [2024-11-27 21:49:14.705407] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:51.736 21:49:14 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.736 21:49:14 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:51.736 21:49:14 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.736 21:49:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:51.736 malloc0 00:16:51.736 21:49:14 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.736 21:49:14 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:51.736 21:49:14 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.736 21:49:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:51.736 [2024-11-27 21:49:14.736746] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:51.736 [2024-11-27 21:49:14.736786] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:51.736 [2024-11-27 21:49:14.736794] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:51.736 [2024-11-27 21:49:14.744390] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:51.736 [2024-11-27 21:49:14.744411] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:16:51.736 [2024-11-27 21:49:14.744429] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:51.736 [2024-11-27 21:49:14.744491] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:16:51.736 1 00:16:51.736 21:49:14 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.736 21:49:14 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 84981 00:16:51.736 [2024-11-27 21:49:14.752367] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:16:51.736 [2024-11-27 21:49:14.758944] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:16:51.736 [2024-11-27 21:49:14.766584] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:16:51.736 [2024-11-27 
21:49:14.766605] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:47.974 00:17:47.974 fio_test: (groupid=0, jobs=1): err= 0: pid=84984: Wed Nov 27 21:50:03 2024 00:17:47.974 read: IOPS=25.3k, BW=98.7MiB/s (104MB/s)(5925MiB/60001msec) 00:17:47.974 slat (nsec): min=1395, max=387427, avg=5633.60, stdev=1592.13 00:17:47.974 clat (usec): min=795, max=6009.9k, avg=2474.16, stdev=38406.23 00:17:47.974 lat (usec): min=800, max=6009.9k, avg=2479.79, stdev=38406.22 00:17:47.974 clat percentiles (usec): 00:17:47.974 | 1.00th=[ 1876], 5.00th=[ 2008], 10.00th=[ 2040], 20.00th=[ 2057], 00:17:47.974 | 30.00th=[ 2073], 40.00th=[ 2089], 50.00th=[ 2114], 60.00th=[ 2114], 00:17:47.974 | 70.00th=[ 2147], 80.00th=[ 2180], 90.00th=[ 2212], 95.00th=[ 3097], 00:17:47.974 | 99.00th=[ 4948], 99.50th=[ 5800], 99.90th=[ 7177], 99.95th=[ 8094], 00:17:47.974 | 99.99th=[12387] 00:17:47.974 bw ( KiB/s): min=17520, max=115832, per=100.00%, avg=111369.60, stdev=12609.15, samples=108 00:17:47.974 iops : min= 4380, max=28958, avg=27842.40, stdev=3152.29, samples=108 00:17:47.974 write: IOPS=25.2k, BW=98.6MiB/s (103MB/s)(5918MiB/60001msec); 0 zone resets 00:17:47.974 slat (nsec): min=1496, max=440799, avg=5906.37, stdev=1611.70 00:17:47.974 clat (usec): min=789, max=6010.1k, avg=2579.96, stdev=39649.37 00:17:47.974 lat (usec): min=794, max=6010.1k, avg=2585.87, stdev=39649.37 00:17:47.974 clat percentiles (usec): 00:17:47.974 | 1.00th=[ 1942], 5.00th=[ 2114], 10.00th=[ 2147], 20.00th=[ 2180], 00:17:47.974 | 30.00th=[ 2180], 40.00th=[ 2212], 50.00th=[ 2212], 60.00th=[ 2245], 00:17:47.974 | 70.00th=[ 2245], 80.00th=[ 2278], 90.00th=[ 2311], 95.00th=[ 3032], 00:17:47.974 | 99.00th=[ 4948], 99.50th=[ 5932], 99.90th=[ 7046], 99.95th=[ 8160], 00:17:47.974 | 99.99th=[12387] 00:17:47.974 bw ( KiB/s): min=17840, max=117008, per=100.00%, avg=111220.56, stdev=12562.00, samples=108 00:17:47.974 iops : min= 4460, max=29252, avg=27805.14, stdev=3140.50, samples=108 00:17:47.974 lat (usec) : 1000=0.01% 00:17:47.974 lat (msec) : 2=3.05%, 4=94.47%, 10=2.46%, 20=0.01%, >=2000=0.01% 00:17:47.974 cpu : usr=5.66%, sys=29.82%, ctx=99400, majf=0, minf=13 00:17:47.974 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:47.974 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:47.974 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:47.974 issued rwts: total=1516802,1514888,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:47.974 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:47.974 00:17:47.974 Run status group 0 (all jobs): 00:17:47.974 READ: bw=98.7MiB/s (104MB/s), 98.7MiB/s-98.7MiB/s (104MB/s-104MB/s), io=5925MiB (6213MB), run=60001-60001msec 00:17:47.974 WRITE: bw=98.6MiB/s (103MB/s), 98.6MiB/s-98.6MiB/s (103MB/s-103MB/s), io=5918MiB (6205MB), run=60001-60001msec 00:17:47.974 00:17:47.974 Disk stats (read/write): 00:17:47.974 ublkb1: ios=1513712/1511749, merge=0/0, ticks=3659609/3680685, in_queue=7340294, util=99.90% 00:17:47.974 21:50:04 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:47.974 21:50:04 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.974 21:50:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:47.974 [2024-11-27 21:50:04.012066] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:47.974 [2024-11-27 21:50:04.051375] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 
00:17:47.974 [2024-11-27 21:50:04.051539] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:47.974 [2024-11-27 21:50:04.060397] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:47.974 [2024-11-27 21:50:04.060509] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:47.974 [2024-11-27 21:50:04.060531] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:47.974 21:50:04 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.974 21:50:04 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:47.974 21:50:04 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.974 21:50:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:47.974 [2024-11-27 21:50:04.075437] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:47.974 [2024-11-27 21:50:04.076729] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:47.974 [2024-11-27 21:50:04.076764] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:47.974 21:50:04 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.974 21:50:04 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:47.974 21:50:04 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:47.974 21:50:04 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 85092 00:17:47.974 21:50:04 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 85092 ']' 00:17:47.974 21:50:04 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 85092 00:17:47.974 21:50:04 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:17:47.974 21:50:04 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:47.974 21:50:04 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85092 00:17:47.974 21:50:04 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:47.974 21:50:04 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:47.974 21:50:04 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85092' 00:17:47.974 killing process with pid 85092 00:17:47.974 21:50:04 ublk_recovery -- common/autotest_common.sh@973 -- # kill 85092 00:17:47.974 21:50:04 ublk_recovery -- common/autotest_common.sh@978 -- # wait 85092 00:17:47.974 [2024-11-27 21:50:04.342792] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:47.974 [2024-11-27 21:50:04.342855] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:47.974 00:17:47.974 real 1m3.069s 00:17:47.974 user 1m37.452s 00:17:47.974 sys 0m39.375s 00:17:47.974 21:50:04 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:47.974 ************************************ 00:17:47.974 END TEST ublk_recovery 00:17:47.974 21:50:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:47.975 ************************************ 00:17:47.975 21:50:04 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:17:47.975 21:50:04 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:47.975 21:50:04 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:47.975 21:50:04 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:47.975 21:50:04 -- common/autotest_common.sh@10 -- # set +x 00:17:47.975 21:50:04 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:47.975 21:50:04 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:17:47.975 21:50:04 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 
00:17:47.975 21:50:04 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:47.975 21:50:04 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:47.975 21:50:04 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:47.975 21:50:04 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:47.975 21:50:04 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:47.975 21:50:04 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:47.975 21:50:04 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:17:47.975 21:50:04 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:47.975 21:50:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:47.975 21:50:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:47.975 21:50:04 -- common/autotest_common.sh@10 -- # set +x 00:17:47.975 ************************************ 00:17:47.975 START TEST ftl 00:17:47.975 ************************************ 00:17:47.975 21:50:04 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:47.975 * Looking for test storage... 00:17:47.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:47.975 21:50:04 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:47.975 21:50:04 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:17:47.975 21:50:04 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:47.975 21:50:04 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:47.975 21:50:04 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:47.975 21:50:04 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:47.975 21:50:04 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:47.975 21:50:04 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:47.975 21:50:04 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:47.975 21:50:04 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:47.975 21:50:04 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:47.975 21:50:04 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:47.975 21:50:04 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:47.975 21:50:04 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:47.975 21:50:04 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:47.975 21:50:04 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:47.975 21:50:04 ftl -- scripts/common.sh@345 -- # : 1 00:17:47.975 21:50:04 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:47.975 21:50:04 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:47.975 21:50:04 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:47.975 21:50:04 ftl -- scripts/common.sh@353 -- # local d=1 00:17:47.975 21:50:04 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:47.975 21:50:04 ftl -- scripts/common.sh@355 -- # echo 1 00:17:47.975 21:50:04 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:47.975 21:50:04 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:47.975 21:50:04 ftl -- scripts/common.sh@353 -- # local d=2 00:17:47.975 21:50:04 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:47.975 21:50:04 ftl -- scripts/common.sh@355 -- # echo 2 00:17:47.975 21:50:04 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:47.975 21:50:04 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:47.975 21:50:04 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:47.975 21:50:04 ftl -- scripts/common.sh@368 -- # return 0 00:17:47.975 21:50:04 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:47.975 21:50:04 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:47.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.975 --rc genhtml_branch_coverage=1 00:17:47.975 --rc genhtml_function_coverage=1 00:17:47.975 --rc genhtml_legend=1 00:17:47.975 --rc geninfo_all_blocks=1 00:17:47.975 --rc geninfo_unexecuted_blocks=1 00:17:47.975 00:17:47.975 ' 00:17:47.975 21:50:04 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:47.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.975 --rc genhtml_branch_coverage=1 00:17:47.975 --rc genhtml_function_coverage=1 00:17:47.975 --rc genhtml_legend=1 00:17:47.975 --rc geninfo_all_blocks=1 00:17:47.975 --rc geninfo_unexecuted_blocks=1 00:17:47.975 00:17:47.975 ' 00:17:47.975 21:50:04 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:47.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.975 --rc genhtml_branch_coverage=1 00:17:47.975 --rc genhtml_function_coverage=1 00:17:47.975 --rc genhtml_legend=1 00:17:47.975 --rc geninfo_all_blocks=1 00:17:47.975 --rc geninfo_unexecuted_blocks=1 00:17:47.975 00:17:47.975 ' 00:17:47.975 21:50:04 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:47.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.975 --rc genhtml_branch_coverage=1 00:17:47.975 --rc genhtml_function_coverage=1 00:17:47.975 --rc genhtml_legend=1 00:17:47.975 --rc geninfo_all_blocks=1 00:17:47.975 --rc geninfo_unexecuted_blocks=1 00:17:47.975 00:17:47.975 ' 00:17:47.975 21:50:04 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:47.975 21:50:04 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:47.975 21:50:04 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:47.975 21:50:04 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:47.975 21:50:04 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:47.975 21:50:04 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:47.975 21:50:04 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:47.975 21:50:04 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:47.975 21:50:04 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:47.975 21:50:04 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:47.975 21:50:04 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:47.975 21:50:04 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:47.975 21:50:04 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:47.975 21:50:04 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:47.975 21:50:04 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:47.975 21:50:04 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:47.975 21:50:04 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:47.975 21:50:04 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:47.975 21:50:04 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:47.975 21:50:04 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:47.975 21:50:04 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:47.975 21:50:04 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:47.975 21:50:04 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:47.975 21:50:04 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:47.975 21:50:04 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:47.975 21:50:04 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:47.975 21:50:04 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:47.975 21:50:04 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:47.975 21:50:04 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:47.975 21:50:04 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:47.975 21:50:04 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:47.975 21:50:04 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:17:47.976 21:50:04 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:47.976 21:50:04 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:47.976 21:50:04 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:47.976 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:47.976 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:47.976 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:47.976 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:47.976 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:47.976 21:50:05 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=85887 00:17:47.976 21:50:05 ftl -- ftl/ftl.sh@38 -- # waitforlisten 85887 00:17:47.976 21:50:05 ftl -- common/autotest_common.sh@835 -- # '[' -z 85887 ']' 00:17:47.976 21:50:05 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.976 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.976 21:50:05 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.976 21:50:05 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.976 21:50:05 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.976 21:50:05 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:47.976 21:50:05 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:47.976 [2024-11-27 21:50:05.534527] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:17:47.976 [2024-11-27 21:50:05.534651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85887 ] 00:17:47.976 [2024-11-27 21:50:05.675939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.976 [2024-11-27 21:50:05.698934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.976 21:50:06 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:47.976 21:50:06 ftl -- common/autotest_common.sh@868 -- # return 0 00:17:47.976 21:50:06 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:47.976 21:50:06 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:47.976 21:50:06 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:47.976 21:50:06 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:47.976 21:50:07 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:47.976 21:50:07 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:47.976 21:50:07 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:47.976 21:50:07 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:47.976 21:50:07 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:47.976 21:50:07 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:47.976 21:50:07 ftl -- ftl/ftl.sh@50 -- # break 00:17:47.976 21:50:07 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:47.976 21:50:07 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:17:47.976 21:50:07 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:47.976 21:50:07 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:47.976 21:50:07 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:47.976 21:50:07 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:47.976 21:50:07 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:47.976 21:50:07 ftl -- ftl/ftl.sh@63 -- # break 00:17:47.976 21:50:07 ftl -- ftl/ftl.sh@66 -- # killprocess 85887 00:17:47.976 21:50:07 ftl -- common/autotest_common.sh@954 -- # '[' -z 85887 ']' 00:17:47.976 21:50:07 ftl -- common/autotest_common.sh@958 -- # kill -0 85887 00:17:47.976 21:50:07 ftl -- common/autotest_common.sh@959 -- # uname 00:17:47.976 21:50:07 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:47.976 21:50:07 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85887 00:17:47.976 21:50:07 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:47.976 killing process with pid 85887 00:17:47.976 21:50:07 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:47.976 21:50:07 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85887' 00:17:47.976 21:50:07 ftl -- common/autotest_common.sh@973 -- # kill 85887 00:17:47.976 21:50:07 ftl -- common/autotest_common.sh@978 -- # wait 85887 00:17:47.976 21:50:08 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:47.976 21:50:08 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:47.976 21:50:08 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:47.976 21:50:08 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:47.976 21:50:08 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:47.976 ************************************ 00:17:47.976 START TEST ftl_fio_basic 00:17:47.976 ************************************ 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:47.976 * Looking for test storage... 00:17:47.976 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:47.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.976 --rc genhtml_branch_coverage=1 00:17:47.976 --rc genhtml_function_coverage=1 00:17:47.976 --rc genhtml_legend=1 00:17:47.976 --rc geninfo_all_blocks=1 00:17:47.976 --rc geninfo_unexecuted_blocks=1 00:17:47.976 00:17:47.976 ' 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:47.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.976 --rc genhtml_branch_coverage=1 00:17:47.976 --rc genhtml_function_coverage=1 00:17:47.976 --rc genhtml_legend=1 00:17:47.976 --rc geninfo_all_blocks=1 00:17:47.976 --rc geninfo_unexecuted_blocks=1 00:17:47.976 00:17:47.976 ' 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:47.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.976 --rc genhtml_branch_coverage=1 00:17:47.976 --rc genhtml_function_coverage=1 00:17:47.976 --rc genhtml_legend=1 00:17:47.976 --rc geninfo_all_blocks=1 00:17:47.976 --rc geninfo_unexecuted_blocks=1 00:17:47.976 00:17:47.976 ' 00:17:47.976 21:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:47.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.976 --rc genhtml_branch_coverage=1 00:17:47.976 --rc genhtml_function_coverage=1 00:17:47.976 --rc genhtml_legend=1 00:17:47.976 --rc geninfo_all_blocks=1 00:17:47.976 --rc geninfo_unexecuted_blocks=1 00:17:47.976 00:17:47.977 ' 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=86009 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 86009 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 86009 ']' 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:47.977 21:50:08 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:47.977 [2024-11-27 21:50:08.358010] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:17:47.977 [2024-11-27 21:50:08.358131] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86009 ] 00:17:47.977 [2024-11-27 21:50:08.499966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:47.977 [2024-11-27 21:50:08.529942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.977 [2024-11-27 21:50:08.530227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.977 [2024-11-27 21:50:08.530296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.977 21:50:09 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:47.977 21:50:09 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:17:47.977 21:50:09 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:47.977 21:50:09 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:47.977 21:50:09 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:47.977 21:50:09 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:47.977 21:50:09 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:47.977 21:50:09 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:47.977 21:50:09 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:47.977 21:50:09 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:47.977 21:50:09 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:47.977 21:50:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:47.977 21:50:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:47.977 21:50:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:47.977 21:50:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:47.977 21:50:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:47.977 21:50:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:47.977 { 00:17:47.977 "name": "nvme0n1", 00:17:47.977 "aliases": [ 00:17:47.977 "4f6c419d-2fb8-4795-aedf-95e3a81b02c4" 00:17:47.977 ], 00:17:47.977 "product_name": "NVMe disk", 00:17:47.977 "block_size": 4096, 00:17:47.977 "num_blocks": 1310720, 00:17:47.977 "uuid": "4f6c419d-2fb8-4795-aedf-95e3a81b02c4", 00:17:47.977 "numa_id": -1, 00:17:47.977 "assigned_rate_limits": { 00:17:47.977 "rw_ios_per_sec": 0, 00:17:47.977 "rw_mbytes_per_sec": 0, 00:17:47.977 "r_mbytes_per_sec": 0, 00:17:47.977 "w_mbytes_per_sec": 0 00:17:47.977 }, 00:17:47.977 "claimed": false, 00:17:47.977 "zoned": false, 00:17:47.977 "supported_io_types": { 00:17:47.977 "read": true, 00:17:47.977 "write": true, 00:17:47.977 "unmap": true, 00:17:47.977 "flush": true, 00:17:47.977 "reset": true, 00:17:47.977 "nvme_admin": true, 00:17:47.977 "nvme_io": true, 00:17:47.977 "nvme_io_md": false, 00:17:47.977 "write_zeroes": true, 00:17:47.977 "zcopy": false, 00:17:47.977 "get_zone_info": false, 00:17:47.977 "zone_management": false, 00:17:47.977 "zone_append": false, 00:17:47.977 "compare": true, 00:17:47.977 "compare_and_write": false, 00:17:47.977 "abort": true, 00:17:47.977 
"seek_hole": false, 00:17:47.977 "seek_data": false, 00:17:47.977 "copy": true, 00:17:47.977 "nvme_iov_md": false 00:17:47.977 }, 00:17:47.977 "driver_specific": { 00:17:47.977 "nvme": [ 00:17:47.977 { 00:17:47.977 "pci_address": "0000:00:11.0", 00:17:47.977 "trid": { 00:17:47.977 "trtype": "PCIe", 00:17:47.977 "traddr": "0000:00:11.0" 00:17:47.978 }, 00:17:47.978 "ctrlr_data": { 00:17:47.978 "cntlid": 0, 00:17:47.978 "vendor_id": "0x1b36", 00:17:47.978 "model_number": "QEMU NVMe Ctrl", 00:17:47.978 "serial_number": "12341", 00:17:47.978 "firmware_revision": "8.0.0", 00:17:47.978 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:47.978 "oacs": { 00:17:47.978 "security": 0, 00:17:47.978 "format": 1, 00:17:47.978 "firmware": 0, 00:17:47.978 "ns_manage": 1 00:17:47.978 }, 00:17:47.978 "multi_ctrlr": false, 00:17:47.978 "ana_reporting": false 00:17:47.978 }, 00:17:47.978 "vs": { 00:17:47.978 "nvme_version": "1.4" 00:17:47.978 }, 00:17:47.978 "ns_data": { 00:17:47.978 "id": 1, 00:17:47.978 "can_share": false 00:17:47.978 } 00:17:47.978 } 00:17:47.978 ], 00:17:47.978 "mp_policy": "active_passive" 00:17:47.978 } 00:17:47.978 } 00:17:47.978 ]' 00:17:47.978 21:50:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:47.978 21:50:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:47.978 21:50:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:47.978 21:50:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:47.978 21:50:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:47.978 21:50:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:17:47.978 21:50:09 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:47.978 21:50:09 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:47.978 21:50:09 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:47.978 21:50:09 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:47.978 21:50:09 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:47.978 21:50:09 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:47.978 21:50:09 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:47.978 21:50:10 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=cd0add2f-c8f0-44ca-9f19-707150f344a4 00:17:47.978 21:50:10 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u cd0add2f-c8f0-44ca-9f19-707150f344a4 00:17:47.978 21:50:10 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=24a825c9-0532-44ed-9aff-be089b38cc53 00:17:47.978 21:50:10 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 24a825c9-0532-44ed-9aff-be089b38cc53 00:17:47.978 21:50:10 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:47.978 21:50:10 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:47.978 21:50:10 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=24a825c9-0532-44ed-9aff-be089b38cc53 00:17:47.978 21:50:10 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:47.978 21:50:10 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 24a825c9-0532-44ed-9aff-be089b38cc53 00:17:47.978 21:50:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=24a825c9-0532-44ed-9aff-be089b38cc53 
00:17:47.978 21:50:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:47.978 21:50:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:47.978 21:50:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:47.978 21:50:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 24a825c9-0532-44ed-9aff-be089b38cc53 00:17:47.978 21:50:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:47.978 { 00:17:47.978 "name": "24a825c9-0532-44ed-9aff-be089b38cc53", 00:17:47.978 "aliases": [ 00:17:47.978 "lvs/nvme0n1p0" 00:17:47.978 ], 00:17:47.978 "product_name": "Logical Volume", 00:17:47.978 "block_size": 4096, 00:17:47.978 "num_blocks": 26476544, 00:17:47.978 "uuid": "24a825c9-0532-44ed-9aff-be089b38cc53", 00:17:47.978 "assigned_rate_limits": { 00:17:47.978 "rw_ios_per_sec": 0, 00:17:47.978 "rw_mbytes_per_sec": 0, 00:17:47.978 "r_mbytes_per_sec": 0, 00:17:47.978 "w_mbytes_per_sec": 0 00:17:47.978 }, 00:17:47.978 "claimed": false, 00:17:47.978 "zoned": false, 00:17:47.978 "supported_io_types": { 00:17:47.978 "read": true, 00:17:47.978 "write": true, 00:17:47.978 "unmap": true, 00:17:47.978 "flush": false, 00:17:47.978 "reset": true, 00:17:47.978 "nvme_admin": false, 00:17:47.978 "nvme_io": false, 00:17:47.978 "nvme_io_md": false, 00:17:47.978 "write_zeroes": true, 00:17:47.978 "zcopy": false, 00:17:47.978 "get_zone_info": false, 00:17:47.978 "zone_management": false, 00:17:47.978 "zone_append": false, 00:17:47.978 "compare": false, 00:17:47.978 "compare_and_write": false, 00:17:47.978 "abort": false, 00:17:47.978 "seek_hole": true, 00:17:47.978 "seek_data": true, 00:17:47.978 "copy": false, 00:17:47.978 "nvme_iov_md": false 00:17:47.978 }, 00:17:47.978 "driver_specific": { 00:17:47.978 "lvol": { 00:17:47.978 "lvol_store_uuid": "cd0add2f-c8f0-44ca-9f19-707150f344a4", 00:17:47.978 "base_bdev": "nvme0n1", 00:17:47.978 "thin_provision": true, 00:17:47.978 "num_allocated_clusters": 0, 00:17:47.978 "snapshot": false, 00:17:47.978 "clone": false, 00:17:47.978 "esnap_clone": false 00:17:47.978 } 00:17:47.978 } 00:17:47.978 } 00:17:47.978 ]' 00:17:47.978 21:50:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:47.978 21:50:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:47.978 21:50:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:47.978 21:50:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:47.978 21:50:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:47.978 21:50:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:47.978 21:50:10 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:47.978 21:50:10 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:47.978 21:50:10 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:47.978 21:50:10 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:47.978 21:50:10 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:47.978 21:50:10 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 24a825c9-0532-44ed-9aff-be089b38cc53 00:17:47.978 21:50:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=24a825c9-0532-44ed-9aff-be089b38cc53 00:17:47.978 21:50:10 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:47.978 21:50:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:47.978 21:50:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:47.978 21:50:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 24a825c9-0532-44ed-9aff-be089b38cc53 00:17:47.978 21:50:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:47.978 { 00:17:47.978 "name": "24a825c9-0532-44ed-9aff-be089b38cc53", 00:17:47.978 "aliases": [ 00:17:47.978 "lvs/nvme0n1p0" 00:17:47.978 ], 00:17:47.978 "product_name": "Logical Volume", 00:17:47.978 "block_size": 4096, 00:17:47.978 "num_blocks": 26476544, 00:17:47.978 "uuid": "24a825c9-0532-44ed-9aff-be089b38cc53", 00:17:47.978 "assigned_rate_limits": { 00:17:47.978 "rw_ios_per_sec": 0, 00:17:47.978 "rw_mbytes_per_sec": 0, 00:17:47.978 "r_mbytes_per_sec": 0, 00:17:47.978 "w_mbytes_per_sec": 0 00:17:47.978 }, 00:17:47.978 "claimed": false, 00:17:47.978 "zoned": false, 00:17:47.978 "supported_io_types": { 00:17:47.978 "read": true, 00:17:47.978 "write": true, 00:17:47.978 "unmap": true, 00:17:47.978 "flush": false, 00:17:47.978 "reset": true, 00:17:47.978 "nvme_admin": false, 00:17:47.978 "nvme_io": false, 00:17:47.978 "nvme_io_md": false, 00:17:47.978 "write_zeroes": true, 00:17:47.978 "zcopy": false, 00:17:47.978 "get_zone_info": false, 00:17:47.978 "zone_management": false, 00:17:47.978 "zone_append": false, 00:17:47.978 "compare": false, 00:17:47.978 "compare_and_write": false, 00:17:47.978 "abort": false, 00:17:47.978 "seek_hole": true, 00:17:47.978 "seek_data": true, 00:17:47.978 "copy": false, 00:17:47.978 "nvme_iov_md": false 00:17:47.978 }, 00:17:47.978 "driver_specific": { 00:17:47.978 "lvol": { 00:17:47.978 "lvol_store_uuid": "cd0add2f-c8f0-44ca-9f19-707150f344a4", 00:17:47.978 "base_bdev": "nvme0n1", 00:17:47.978 "thin_provision": true, 00:17:47.978 "num_allocated_clusters": 0, 00:17:47.978 "snapshot": false, 00:17:47.978 "clone": false, 00:17:47.978 "esnap_clone": false 00:17:47.979 } 00:17:47.979 } 00:17:47.979 } 00:17:47.979 ]' 00:17:47.979 21:50:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:47.979 21:50:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:47.979 21:50:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:48.240 21:50:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:48.240 21:50:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:48.240 21:50:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:48.240 21:50:11 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:48.240 21:50:11 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:48.240 21:50:11 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:48.240 21:50:11 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:48.240 21:50:11 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:48.240 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:48.240 21:50:11 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 24a825c9-0532-44ed-9aff-be089b38cc53 00:17:48.240 21:50:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=24a825c9-0532-44ed-9aff-be089b38cc53 00:17:48.240 21:50:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:48.240 21:50:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:48.240 21:50:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:48.240 21:50:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 24a825c9-0532-44ed-9aff-be089b38cc53 00:17:48.498 21:50:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:48.498 { 00:17:48.498 "name": "24a825c9-0532-44ed-9aff-be089b38cc53", 00:17:48.498 "aliases": [ 00:17:48.498 "lvs/nvme0n1p0" 00:17:48.498 ], 00:17:48.498 "product_name": "Logical Volume", 00:17:48.498 "block_size": 4096, 00:17:48.498 "num_blocks": 26476544, 00:17:48.498 "uuid": "24a825c9-0532-44ed-9aff-be089b38cc53", 00:17:48.498 "assigned_rate_limits": { 00:17:48.498 "rw_ios_per_sec": 0, 00:17:48.498 "rw_mbytes_per_sec": 0, 00:17:48.498 "r_mbytes_per_sec": 0, 00:17:48.498 "w_mbytes_per_sec": 0 00:17:48.498 }, 00:17:48.498 "claimed": false, 00:17:48.498 "zoned": false, 00:17:48.498 "supported_io_types": { 00:17:48.498 "read": true, 00:17:48.498 "write": true, 00:17:48.498 "unmap": true, 00:17:48.498 "flush": false, 00:17:48.498 "reset": true, 00:17:48.498 "nvme_admin": false, 00:17:48.498 "nvme_io": false, 00:17:48.498 "nvme_io_md": false, 00:17:48.498 "write_zeroes": true, 00:17:48.498 "zcopy": false, 00:17:48.498 "get_zone_info": false, 00:17:48.498 "zone_management": false, 00:17:48.498 "zone_append": false, 00:17:48.498 "compare": false, 00:17:48.498 "compare_and_write": false, 00:17:48.498 "abort": false, 00:17:48.498 "seek_hole": true, 00:17:48.498 "seek_data": true, 00:17:48.498 "copy": false, 00:17:48.498 "nvme_iov_md": false 00:17:48.498 }, 00:17:48.498 "driver_specific": { 00:17:48.498 "lvol": { 00:17:48.498 "lvol_store_uuid": "cd0add2f-c8f0-44ca-9f19-707150f344a4", 00:17:48.498 "base_bdev": "nvme0n1", 00:17:48.498 "thin_provision": true, 00:17:48.498 "num_allocated_clusters": 0, 00:17:48.498 "snapshot": false, 00:17:48.498 "clone": false, 00:17:48.498 "esnap_clone": false 00:17:48.498 } 00:17:48.498 } 00:17:48.498 } 00:17:48.498 ]' 00:17:48.498 21:50:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:48.498 21:50:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:48.498 21:50:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:48.498 21:50:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:48.498 21:50:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:48.498 21:50:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:48.498 21:50:11 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:48.498 21:50:11 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:48.498 21:50:11 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 24a825c9-0532-44ed-9aff-be089b38cc53 -c nvc0n1p0 --l2p_dram_limit 60 00:17:48.757 [2024-11-27 21:50:11.757387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.757 [2024-11-27 21:50:11.757436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:48.757 [2024-11-27 21:50:11.757456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:48.757 
[2024-11-27 21:50:11.757465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.757 [2024-11-27 21:50:11.757534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.757 [2024-11-27 21:50:11.757543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:48.757 [2024-11-27 21:50:11.757550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:48.757 [2024-11-27 21:50:11.757560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.757 [2024-11-27 21:50:11.757582] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:48.757 [2024-11-27 21:50:11.757792] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:48.757 [2024-11-27 21:50:11.757804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.757 [2024-11-27 21:50:11.757819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:48.757 [2024-11-27 21:50:11.757826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:17:48.758 [2024-11-27 21:50:11.757835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.758 [2024-11-27 21:50:11.757868] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6764456c-25b0-4aa3-92f2-272d9ac41ae1 00:17:48.758 [2024-11-27 21:50:11.759156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.758 [2024-11-27 21:50:11.759175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:48.758 [2024-11-27 21:50:11.759186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:48.758 [2024-11-27 21:50:11.759193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.758 [2024-11-27 21:50:11.766008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.758 [2024-11-27 21:50:11.766031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:48.758 [2024-11-27 21:50:11.766044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.734 ms 00:17:48.758 [2024-11-27 21:50:11.766050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.758 [2024-11-27 21:50:11.766139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.758 [2024-11-27 21:50:11.766146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:48.758 [2024-11-27 21:50:11.766156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:48.758 [2024-11-27 21:50:11.766162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.758 [2024-11-27 21:50:11.766214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.758 [2024-11-27 21:50:11.766222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:48.758 [2024-11-27 21:50:11.766240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:48.758 [2024-11-27 21:50:11.766248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.758 [2024-11-27 21:50:11.766278] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:48.758 [2024-11-27 21:50:11.767919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.758 [2024-11-27 
21:50:11.767941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:48.758 [2024-11-27 21:50:11.767948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.646 ms 00:17:48.758 [2024-11-27 21:50:11.767964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.758 [2024-11-27 21:50:11.768001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.758 [2024-11-27 21:50:11.768009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:48.758 [2024-11-27 21:50:11.768016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:48.758 [2024-11-27 21:50:11.768028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.758 [2024-11-27 21:50:11.768051] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:48.758 [2024-11-27 21:50:11.768165] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:48.758 [2024-11-27 21:50:11.768188] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:48.758 [2024-11-27 21:50:11.768207] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:48.758 [2024-11-27 21:50:11.768221] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:48.758 [2024-11-27 21:50:11.768232] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:48.758 [2024-11-27 21:50:11.768238] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:48.758 [2024-11-27 21:50:11.768246] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:48.758 [2024-11-27 21:50:11.768251] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:48.758 [2024-11-27 21:50:11.768259] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:48.758 [2024-11-27 21:50:11.768266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.758 [2024-11-27 21:50:11.768273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:48.758 [2024-11-27 21:50:11.768280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:17:48.758 [2024-11-27 21:50:11.768300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.758 [2024-11-27 21:50:11.768385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.758 [2024-11-27 21:50:11.768402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:48.758 [2024-11-27 21:50:11.768409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:48.758 [2024-11-27 21:50:11.768416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.758 [2024-11-27 21:50:11.768517] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:48.758 [2024-11-27 21:50:11.768531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:48.758 [2024-11-27 21:50:11.768538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:48.758 [2024-11-27 21:50:11.768545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.758 [2024-11-27 21:50:11.768552] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:17:48.758 [2024-11-27 21:50:11.768558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:48.758 [2024-11-27 21:50:11.768564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:48.758 [2024-11-27 21:50:11.768571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:48.758 [2024-11-27 21:50:11.768576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:48.758 [2024-11-27 21:50:11.768584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:48.758 [2024-11-27 21:50:11.768590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:48.758 [2024-11-27 21:50:11.768601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:48.758 [2024-11-27 21:50:11.768606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:48.758 [2024-11-27 21:50:11.768614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:48.758 [2024-11-27 21:50:11.768619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:48.758 [2024-11-27 21:50:11.768626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.758 [2024-11-27 21:50:11.768632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:48.758 [2024-11-27 21:50:11.768639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:48.758 [2024-11-27 21:50:11.768646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.758 [2024-11-27 21:50:11.768653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:48.758 [2024-11-27 21:50:11.768658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:48.758 [2024-11-27 21:50:11.768664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:48.758 [2024-11-27 21:50:11.768670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:48.758 [2024-11-27 21:50:11.768676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:48.758 [2024-11-27 21:50:11.768681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:48.758 [2024-11-27 21:50:11.768688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:48.758 [2024-11-27 21:50:11.768693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:48.758 [2024-11-27 21:50:11.768700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:48.758 [2024-11-27 21:50:11.768705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:48.758 [2024-11-27 21:50:11.768714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:48.758 [2024-11-27 21:50:11.768719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:48.758 [2024-11-27 21:50:11.768726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:48.758 [2024-11-27 21:50:11.768731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:48.758 [2024-11-27 21:50:11.768738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:48.758 [2024-11-27 21:50:11.768743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:48.758 [2024-11-27 21:50:11.768750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:48.758 [2024-11-27 21:50:11.768755] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:48.758 [2024-11-27 21:50:11.768761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:48.758 [2024-11-27 21:50:11.768766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:48.758 [2024-11-27 21:50:11.768773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.758 [2024-11-27 21:50:11.768777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:48.759 [2024-11-27 21:50:11.768784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:48.759 [2024-11-27 21:50:11.768789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.759 [2024-11-27 21:50:11.768797] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:48.759 [2024-11-27 21:50:11.768803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:48.759 [2024-11-27 21:50:11.768815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:48.759 [2024-11-27 21:50:11.768820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.759 [2024-11-27 21:50:11.768837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:48.759 [2024-11-27 21:50:11.768842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:48.759 [2024-11-27 21:50:11.768848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:48.759 [2024-11-27 21:50:11.768853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:48.759 [2024-11-27 21:50:11.768860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:48.759 [2024-11-27 21:50:11.768865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:48.759 [2024-11-27 21:50:11.768875] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:48.759 [2024-11-27 21:50:11.768886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:48.759 [2024-11-27 21:50:11.768895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:48.759 [2024-11-27 21:50:11.768900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:48.759 [2024-11-27 21:50:11.768908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:48.759 [2024-11-27 21:50:11.768913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:48.759 [2024-11-27 21:50:11.768920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:48.759 [2024-11-27 21:50:11.768925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:48.759 [2024-11-27 21:50:11.768934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:48.759 [2024-11-27 21:50:11.768939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:17:48.759 [2024-11-27 21:50:11.768946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:48.759 [2024-11-27 21:50:11.768951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:48.759 [2024-11-27 21:50:11.768957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:48.759 [2024-11-27 21:50:11.768963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:48.759 [2024-11-27 21:50:11.768970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:48.759 [2024-11-27 21:50:11.768975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:48.759 [2024-11-27 21:50:11.768982] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:48.759 [2024-11-27 21:50:11.768988] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:48.759 [2024-11-27 21:50:11.768996] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:48.759 [2024-11-27 21:50:11.769001] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:48.759 [2024-11-27 21:50:11.769008] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:48.759 [2024-11-27 21:50:11.769013] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:48.759 [2024-11-27 21:50:11.769035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.759 [2024-11-27 21:50:11.769041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:48.759 [2024-11-27 21:50:11.769050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:17:48.759 [2024-11-27 21:50:11.769065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.759 [2024-11-27 21:50:11.769130] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
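The superblock dump above lists each region in raw 4 KiB blocks (blk_offs/blk_sz), while the dump_region lines report the same extents in MiB. Below is a minimal, illustrative sketch of that conversion; the 4096-byte block size is an assumption taken from the "block_size": 4096 that bdev_get_bdevs reports for ftl0 later in this log, and the pairing of region names with type codes is inferred only by matching the resulting numbers, not taken from the FTL sources.

# Illustrative conversion of blk_offs/blk_sz (4 KiB blocks) into the MiB
# figures printed by dump_region above. FTL_BLOCK_SIZE is an assumption
# based on the "block_size": 4096 reported for ftl0 by bdev_get_bdevs.
FTL_BLOCK_SIZE = 4096  # bytes per block

def blocks_to_mib(blocks: int) -> float:
    # Convert a block count (or block offset) to MiB.
    return blocks * FTL_BLOCK_SIZE / (1024 * 1024)

# A few rows copied from the superblock dump: (name, blk_offs, blk_sz);
# the names are matched to type codes by the resulting offsets/sizes.
regions = [
    ("l2p",            0x20,   0x5000),  # expect offset 0.12 MiB, 80.00 MiB
    ("band_md",        0x5020, 0x80),    # expect offset 80.12 MiB, 0.50 MiB
    ("band_md_mirror", 0x50a0, 0x80),    # expect offset 80.62 MiB, 0.50 MiB
]

for name, offs, size in regions:
    print(f"{name:15s} offset: {blocks_to_mib(offs):.2f} MiB "
          f"blocks: {blocks_to_mib(size):.2f} MiB")

Running this reproduces the 0.12/80.00, 80.12/0.50 and 80.62/0.50 MiB figures printed by dump_region for those regions above.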
00:17:48.759 [2024-11-27 21:50:11.769138] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:51.287 [2024-11-27 21:50:13.945250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.287 [2024-11-27 21:50:13.945291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:51.287 [2024-11-27 21:50:13.945306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2176.110 ms 00:17:51.287 [2024-11-27 21:50:13.945315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.287 [2024-11-27 21:50:13.956007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.287 [2024-11-27 21:50:13.956050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:51.287 [2024-11-27 21:50:13.956066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.582 ms 00:17:51.287 [2024-11-27 21:50:13.956075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.287 [2024-11-27 21:50:13.956207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.287 [2024-11-27 21:50:13.956227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:51.288 [2024-11-27 21:50:13.956247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:51.288 [2024-11-27 21:50:13.956255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.288 [2024-11-27 21:50:13.984181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.288 [2024-11-27 21:50:13.984264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:51.288 [2024-11-27 21:50:13.984299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.857 ms 00:17:51.288 [2024-11-27 21:50:13.984321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.288 [2024-11-27 21:50:13.984460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.288 [2024-11-27 21:50:13.984485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:51.288 [2024-11-27 21:50:13.984513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:51.288 [2024-11-27 21:50:13.984532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.288 [2024-11-27 21:50:13.985209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.288 [2024-11-27 21:50:13.985269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:51.288 [2024-11-27 21:50:13.985324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:17:51.288 [2024-11-27 21:50:13.985373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.288 [2024-11-27 21:50:13.985680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.288 [2024-11-27 21:50:13.985715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:51.288 [2024-11-27 21:50:13.985746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:17:51.288 [2024-11-27 21:50:13.985767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.288 [2024-11-27 21:50:13.992791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.288 [2024-11-27 21:50:13.992815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:51.288 [2024-11-27 
21:50:13.992827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.914 ms 00:17:51.288 [2024-11-27 21:50:13.992837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.288 [2024-11-27 21:50:14.001774] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:51.288 [2024-11-27 21:50:14.019105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.288 [2024-11-27 21:50:14.019135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:51.288 [2024-11-27 21:50:14.019147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.178 ms 00:17:51.288 [2024-11-27 21:50:14.019156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.288 [2024-11-27 21:50:14.057126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.288 [2024-11-27 21:50:14.057159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:51.288 [2024-11-27 21:50:14.057170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.934 ms 00:17:51.288 [2024-11-27 21:50:14.057182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.288 [2024-11-27 21:50:14.057375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.288 [2024-11-27 21:50:14.057387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:51.288 [2024-11-27 21:50:14.057396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:17:51.288 [2024-11-27 21:50:14.057405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.288 [2024-11-27 21:50:14.060355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.288 [2024-11-27 21:50:14.060384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:51.288 [2024-11-27 21:50:14.060394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.918 ms 00:17:51.288 [2024-11-27 21:50:14.060404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.288 [2024-11-27 21:50:14.062673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.288 [2024-11-27 21:50:14.062701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:51.288 [2024-11-27 21:50:14.062712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.238 ms 00:17:51.288 [2024-11-27 21:50:14.062721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.288 [2024-11-27 21:50:14.063020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.288 [2024-11-27 21:50:14.063038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:51.288 [2024-11-27 21:50:14.063056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:17:51.288 [2024-11-27 21:50:14.063068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.288 [2024-11-27 21:50:14.094304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.288 [2024-11-27 21:50:14.094352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:51.288 [2024-11-27 21:50:14.094364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.196 ms 00:17:51.288 [2024-11-27 21:50:14.094375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.288 [2024-11-27 21:50:14.098394] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.288 [2024-11-27 21:50:14.098424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:51.288 [2024-11-27 21:50:14.098435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.948 ms 00:17:51.288 [2024-11-27 21:50:14.098446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.288 [2024-11-27 21:50:14.101127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.288 [2024-11-27 21:50:14.101155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:51.288 [2024-11-27 21:50:14.101164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.636 ms 00:17:51.288 [2024-11-27 21:50:14.101173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.288 [2024-11-27 21:50:14.104597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.288 [2024-11-27 21:50:14.104628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:51.288 [2024-11-27 21:50:14.104639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.382 ms 00:17:51.288 [2024-11-27 21:50:14.104651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.288 [2024-11-27 21:50:14.104704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.288 [2024-11-27 21:50:14.104717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:51.288 [2024-11-27 21:50:14.104727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:51.288 [2024-11-27 21:50:14.104737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.288 [2024-11-27 21:50:14.104809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.288 [2024-11-27 21:50:14.104829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:51.288 [2024-11-27 21:50:14.104838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:51.288 [2024-11-27 21:50:14.104848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.288 [2024-11-27 21:50:14.105906] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2348.018 ms, result 0 00:17:51.288 { 00:17:51.288 "name": "ftl0", 00:17:51.288 "uuid": "6764456c-25b0-4aa3-92f2-272d9ac41ae1" 00:17:51.288 } 00:17:51.288 21:50:14 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:51.288 21:50:14 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:17:51.288 21:50:14 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:51.288 21:50:14 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:17:51.288 21:50:14 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:51.288 21:50:14 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:51.288 21:50:14 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:51.288 21:50:14 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:51.545 [ 00:17:51.545 { 00:17:51.545 "name": "ftl0", 00:17:51.545 "aliases": [ 00:17:51.545 "6764456c-25b0-4aa3-92f2-272d9ac41ae1" 00:17:51.545 ], 00:17:51.545 "product_name": "FTL disk", 00:17:51.545 
"block_size": 4096, 00:17:51.545 "num_blocks": 20971520, 00:17:51.545 "uuid": "6764456c-25b0-4aa3-92f2-272d9ac41ae1", 00:17:51.545 "assigned_rate_limits": { 00:17:51.545 "rw_ios_per_sec": 0, 00:17:51.545 "rw_mbytes_per_sec": 0, 00:17:51.545 "r_mbytes_per_sec": 0, 00:17:51.545 "w_mbytes_per_sec": 0 00:17:51.545 }, 00:17:51.545 "claimed": false, 00:17:51.545 "zoned": false, 00:17:51.545 "supported_io_types": { 00:17:51.545 "read": true, 00:17:51.545 "write": true, 00:17:51.545 "unmap": true, 00:17:51.545 "flush": true, 00:17:51.545 "reset": false, 00:17:51.545 "nvme_admin": false, 00:17:51.545 "nvme_io": false, 00:17:51.546 "nvme_io_md": false, 00:17:51.546 "write_zeroes": true, 00:17:51.546 "zcopy": false, 00:17:51.546 "get_zone_info": false, 00:17:51.546 "zone_management": false, 00:17:51.546 "zone_append": false, 00:17:51.546 "compare": false, 00:17:51.546 "compare_and_write": false, 00:17:51.546 "abort": false, 00:17:51.546 "seek_hole": false, 00:17:51.546 "seek_data": false, 00:17:51.546 "copy": false, 00:17:51.546 "nvme_iov_md": false 00:17:51.546 }, 00:17:51.546 "driver_specific": { 00:17:51.546 "ftl": { 00:17:51.546 "base_bdev": "24a825c9-0532-44ed-9aff-be089b38cc53", 00:17:51.546 "cache": "nvc0n1p0" 00:17:51.546 } 00:17:51.546 } 00:17:51.546 } 00:17:51.546 ] 00:17:51.546 21:50:14 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:17:51.546 21:50:14 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:51.546 21:50:14 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:51.805 21:50:14 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:51.805 21:50:14 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:51.805 [2024-11-27 21:50:14.900504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.805 [2024-11-27 21:50:14.900532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:51.805 [2024-11-27 21:50:14.900542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:51.805 [2024-11-27 21:50:14.900549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.805 [2024-11-27 21:50:14.900574] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:51.805 [2024-11-27 21:50:14.901107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.805 [2024-11-27 21:50:14.901132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:51.805 [2024-11-27 21:50:14.901140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:17:51.805 [2024-11-27 21:50:14.901148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.805 [2024-11-27 21:50:14.901554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.805 [2024-11-27 21:50:14.901579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:51.805 [2024-11-27 21:50:14.901595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:17:51.805 [2024-11-27 21:50:14.901613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.805 [2024-11-27 21:50:14.904019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.805 [2024-11-27 21:50:14.904037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:51.805 [2024-11-27 
21:50:14.904045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.388 ms 00:17:51.805 [2024-11-27 21:50:14.904055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.805 [2024-11-27 21:50:14.908839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.805 [2024-11-27 21:50:14.908859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:51.805 [2024-11-27 21:50:14.908868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.756 ms 00:17:51.805 [2024-11-27 21:50:14.908876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.805 [2024-11-27 21:50:14.910462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.805 [2024-11-27 21:50:14.910491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:51.805 [2024-11-27 21:50:14.910498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.431 ms 00:17:51.805 [2024-11-27 21:50:14.910505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.805 [2024-11-27 21:50:14.914679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.805 [2024-11-27 21:50:14.914709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:51.805 [2024-11-27 21:50:14.914716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.141 ms 00:17:51.805 [2024-11-27 21:50:14.914724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.805 [2024-11-27 21:50:14.914874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.805 [2024-11-27 21:50:14.914885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:51.805 [2024-11-27 21:50:14.914892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:17:51.805 [2024-11-27 21:50:14.914903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.805 [2024-11-27 21:50:14.916516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.805 [2024-11-27 21:50:14.916541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:51.805 [2024-11-27 21:50:14.916548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.589 ms 00:17:51.805 [2024-11-27 21:50:14.916555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.805 [2024-11-27 21:50:14.917843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.805 [2024-11-27 21:50:14.917869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:51.805 [2024-11-27 21:50:14.917876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.257 ms 00:17:51.805 [2024-11-27 21:50:14.917883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.805 [2024-11-27 21:50:14.918997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.805 [2024-11-27 21:50:14.919022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:51.805 [2024-11-27 21:50:14.919029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.082 ms 00:17:51.805 [2024-11-27 21:50:14.919036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.805 [2024-11-27 21:50:14.919825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.805 [2024-11-27 21:50:14.919850] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:51.805 [2024-11-27 21:50:14.919857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:17:51.805 [2024-11-27 21:50:14.919864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.805 [2024-11-27 21:50:14.919896] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:51.805 [2024-11-27 21:50:14.919909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:51.805 [2024-11-27 21:50:14.919917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:51.805 [2024-11-27 21:50:14.919926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:51.805 [2024-11-27 21:50:14.919932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:51.805 [2024-11-27 21:50:14.919942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:51.805 [2024-11-27 21:50:14.919948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:51.805 [2024-11-27 21:50:14.919957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:51.805 [2024-11-27 21:50:14.919963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:51.805 [2024-11-27 21:50:14.919971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:51.805 [2024-11-27 21:50:14.919977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:51.805 [2024-11-27 21:50:14.919985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:51.805 [2024-11-27 21:50:14.919991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:51.805 [2024-11-27 21:50:14.919999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:51.805 [2024-11-27 21:50:14.920005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:51.805 [2024-11-27 21:50:14.920013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:51.805 [2024-11-27 21:50:14.920019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:51.805 [2024-11-27 21:50:14.920027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:51.805 [2024-11-27 21:50:14.920045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:51.805 [2024-11-27 21:50:14.920053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:51.805 [2024-11-27 21:50:14.920059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 
21:50:14.920083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:17:51.806 [2024-11-27 21:50:14.920259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:51.806 [2024-11-27 21:50:14.920552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:51.807 [2024-11-27 21:50:14.920557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:51.807 [2024-11-27 21:50:14.920564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:51.807 [2024-11-27 21:50:14.920570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:51.807 [2024-11-27 21:50:14.920578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:51.807 [2024-11-27 21:50:14.920584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:51.807 [2024-11-27 21:50:14.920592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:51.807 [2024-11-27 21:50:14.920597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:51.807 [2024-11-27 21:50:14.920604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:51.807 [2024-11-27 21:50:14.920612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:51.807 [2024-11-27 21:50:14.920619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:51.807 [2024-11-27 21:50:14.920626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:51.807 [2024-11-27 21:50:14.920641] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:51.807 [2024-11-27 21:50:14.920649] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6764456c-25b0-4aa3-92f2-272d9ac41ae1 00:17:51.807 [2024-11-27 21:50:14.920657] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:51.807 [2024-11-27 21:50:14.920672] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:51.807 [2024-11-27 21:50:14.920679] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:51.807 [2024-11-27 21:50:14.920685] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:51.807 [2024-11-27 21:50:14.920693] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:51.807 [2024-11-27 21:50:14.920699] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:51.807 [2024-11-27 21:50:14.920707] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:51.807 [2024-11-27 21:50:14.920712] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:51.807 [2024-11-27 21:50:14.920719] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:51.807 [2024-11-27 21:50:14.920725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.807 [2024-11-27 21:50:14.920733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:51.807 [2024-11-27 21:50:14.920739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.830 ms 00:17:51.807 [2024-11-27 21:50:14.920747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.807 [2024-11-27 21:50:14.922194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.807 [2024-11-27 21:50:14.922214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:51.807 [2024-11-27 21:50:14.922222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.417 ms 00:17:51.807 [2024-11-27 21:50:14.922230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.807 [2024-11-27 21:50:14.922322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.807 [2024-11-27 21:50:14.922332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:51.807 [2024-11-27 21:50:14.922353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:51.807 [2024-11-27 21:50:14.922368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.065 [2024-11-27 21:50:14.928145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.065 [2024-11-27 21:50:14.928172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:52.065 [2024-11-27 21:50:14.928179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.065 [2024-11-27 21:50:14.928189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.065 
[2024-11-27 21:50:14.928243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.065 [2024-11-27 21:50:14.928252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:52.065 [2024-11-27 21:50:14.928261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.065 [2024-11-27 21:50:14.928277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.065 [2024-11-27 21:50:14.928329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.065 [2024-11-27 21:50:14.928352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:52.065 [2024-11-27 21:50:14.928360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.065 [2024-11-27 21:50:14.928367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.065 [2024-11-27 21:50:14.928387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.065 [2024-11-27 21:50:14.928396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:52.065 [2024-11-27 21:50:14.928403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.065 [2024-11-27 21:50:14.928410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.065 [2024-11-27 21:50:14.939534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.065 [2024-11-27 21:50:14.939567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:52.065 [2024-11-27 21:50:14.939586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.065 [2024-11-27 21:50:14.939594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.065 [2024-11-27 21:50:14.948483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.065 [2024-11-27 21:50:14.948515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:52.065 [2024-11-27 21:50:14.948535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.065 [2024-11-27 21:50:14.948546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.065 [2024-11-27 21:50:14.948610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.066 [2024-11-27 21:50:14.948623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:52.066 [2024-11-27 21:50:14.948630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.066 [2024-11-27 21:50:14.948649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.066 [2024-11-27 21:50:14.948718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.066 [2024-11-27 21:50:14.948729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:52.066 [2024-11-27 21:50:14.948736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.066 [2024-11-27 21:50:14.948744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.066 [2024-11-27 21:50:14.948821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.066 [2024-11-27 21:50:14.948830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:52.066 [2024-11-27 21:50:14.948836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.066 [2024-11-27 21:50:14.948843] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.066 [2024-11-27 21:50:14.948888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.066 [2024-11-27 21:50:14.948897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:52.066 [2024-11-27 21:50:14.948904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.066 [2024-11-27 21:50:14.948911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.066 [2024-11-27 21:50:14.948966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.066 [2024-11-27 21:50:14.948984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:52.066 [2024-11-27 21:50:14.948991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.066 [2024-11-27 21:50:14.948998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.066 [2024-11-27 21:50:14.949054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.066 [2024-11-27 21:50:14.949065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:52.066 [2024-11-27 21:50:14.949071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.066 [2024-11-27 21:50:14.949079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.066 [2024-11-27 21:50:14.949249] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 48.686 ms, result 0 00:17:52.066 true 00:17:52.066 21:50:14 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 86009 00:17:52.066 21:50:14 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 86009 ']' 00:17:52.066 21:50:14 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 86009 00:17:52.066 21:50:14 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:17:52.066 21:50:14 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:52.066 21:50:14 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86009 00:17:52.066 21:50:14 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:52.066 21:50:14 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:52.066 killing process with pid 86009 00:17:52.066 21:50:14 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86009' 00:17:52.066 21:50:14 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 86009 00:17:52.066 21:50:14 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 86009 00:17:57.346 21:50:20 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:17:57.346 21:50:20 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:57.346 21:50:20 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:17:57.346 21:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:57.346 21:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:57.346 21:50:20 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:57.346 21:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:57.346 21:50:20 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:57.346 21:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:57.346 21:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:57.346 21:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:57.346 21:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:17:57.346 21:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:57.346 21:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:57.346 21:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:17:57.346 21:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:57.346 21:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:57.346 21:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:57.346 21:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:57.346 21:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:17:57.346 21:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:57.346 21:50:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:57.346 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:17:57.346 fio-3.35 00:17:57.346 Starting 1 thread 00:18:02.637 00:18:02.637 test: (groupid=0, jobs=1): err= 0: pid=86166: Wed Nov 27 21:50:24 2024 00:18:02.637 read: IOPS=949, BW=63.0MiB/s (66.1MB/s)(255MiB/4037msec) 00:18:02.637 slat (nsec): min=4086, max=40869, avg=6058.96, stdev=2514.41 00:18:02.637 clat (usec): min=259, max=1220, avg=473.43, stdev=138.41 00:18:02.637 lat (usec): min=264, max=1225, avg=479.49, stdev=138.87 00:18:02.637 clat percentiles (usec): 00:18:02.637 | 1.00th=[ 293], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 318], 00:18:02.637 | 30.00th=[ 412], 40.00th=[ 469], 50.00th=[ 482], 60.00th=[ 490], 00:18:02.637 | 70.00th=[ 515], 80.00th=[ 553], 90.00th=[ 603], 95.00th=[ 783], 00:18:02.637 | 99.00th=[ 938], 99.50th=[ 988], 99.90th=[ 1106], 99.95th=[ 1172], 00:18:02.637 | 99.99th=[ 1221] 00:18:02.637 write: IOPS=956, BW=63.5MiB/s (66.6MB/s)(256MiB/4033msec); 0 zone resets 00:18:02.637 slat (usec): min=14, max=176, avg=23.20, stdev= 6.98 00:18:02.637 clat (usec): min=291, max=1284, avg=534.75, stdev=153.94 00:18:02.637 lat (usec): min=316, max=1310, avg=557.95, stdev=153.72 00:18:02.637 clat percentiles (usec): 00:18:02.637 | 1.00th=[ 310], 5.00th=[ 318], 10.00th=[ 322], 20.00th=[ 334], 00:18:02.637 | 30.00th=[ 498], 40.00th=[ 553], 50.00th=[ 570], 60.00th=[ 578], 00:18:02.637 | 70.00th=[ 586], 80.00th=[ 619], 90.00th=[ 668], 95.00th=[ 848], 00:18:02.637 | 99.00th=[ 971], 99.50th=[ 1037], 99.90th=[ 1221], 99.95th=[ 1254], 00:18:02.637 | 99.99th=[ 1287] 00:18:02.637 bw ( KiB/s): min=56032, max=96288, per=100.00%, avg=65127.00, stdev=14664.31, samples=8 00:18:02.637 iops : min= 824, max= 1416, avg=957.75, stdev=215.65, samples=8 00:18:02.637 lat (usec) : 500=49.19%, 750=44.99%, 1000=5.23% 00:18:02.637 
lat (msec) : 2=0.60% 00:18:02.637 cpu : usr=99.13%, sys=0.10%, ctx=9, majf=0, minf=1326 00:18:02.637 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:02.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:02.637 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:02.637 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:02.637 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:02.637 00:18:02.637 Run status group 0 (all jobs): 00:18:02.637 READ: bw=63.0MiB/s (66.1MB/s), 63.0MiB/s-63.0MiB/s (66.1MB/s-66.1MB/s), io=255MiB (267MB), run=4037-4037msec 00:18:02.637 WRITE: bw=63.5MiB/s (66.6MB/s), 63.5MiB/s-63.5MiB/s (66.6MB/s-66.6MB/s), io=256MiB (269MB), run=4033-4033msec 00:18:02.637 ----------------------------------------------------- 00:18:02.637 Suppressions used: 00:18:02.637 count bytes template 00:18:02.637 1 5 /usr/src/fio/parse.c 00:18:02.637 1 8 libtcmalloc_minimal.so 00:18:02.637 1 904 libcrypto.so 00:18:02.637 ----------------------------------------------------- 00:18:02.637 00:18:02.637 21:50:25 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:02.637 21:50:25 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:02.637 21:50:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:02.637 21:50:25 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:02.637 21:50:25 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:02.637 21:50:25 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:02.637 21:50:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:02.637 21:50:25 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:02.637 21:50:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:02.637 21:50:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:02.637 21:50:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:02.637 21:50:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:02.637 21:50:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:02.637 21:50:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:02.637 21:50:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:02.637 21:50:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:02.637 21:50:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:02.637 21:50:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:02.637 21:50:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:02.899 21:50:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:02.899 21:50:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:02.899 21:50:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:02.899 21:50:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:02.899 21:50:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:02.899 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:02.899 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:02.899 fio-3.35 00:18:02.899 Starting 2 threads 00:18:29.467 00:18:29.467 first_half: (groupid=0, jobs=1): err= 0: pid=86259: Wed Nov 27 21:50:49 2024 00:18:29.467 read: IOPS=2869, BW=11.2MiB/s (11.8MB/s)(256MiB/22814msec) 00:18:29.467 slat (nsec): min=3064, max=65617, avg=4540.63, stdev=1134.09 00:18:29.467 clat (usec): min=605, max=461004, avg=37561.46, stdev=27203.84 00:18:29.467 lat (usec): min=609, max=461010, avg=37566.00, stdev=27204.00 00:18:29.467 clat percentiles (msec): 00:18:29.467 | 1.00th=[ 7], 5.00th=[ 26], 10.00th=[ 30], 20.00th=[ 31], 00:18:29.467 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 33], 00:18:29.467 | 70.00th=[ 36], 80.00th=[ 37], 90.00th=[ 43], 95.00th=[ 75], 00:18:29.467 | 99.00th=[ 157], 99.50th=[ 188], 99.90th=[ 347], 99.95th=[ 418], 00:18:29.467 | 99.99th=[ 456] 00:18:29.467 write: IOPS=2876, BW=11.2MiB/s (11.8MB/s)(256MiB/22786msec); 0 zone resets 00:18:29.467 slat (usec): min=3, max=2195, avg= 6.07, stdev=13.59 00:18:29.467 clat (usec): min=273, max=51224, avg=7003.34, stdev=6753.95 00:18:29.467 lat (usec): min=279, max=51230, avg=7009.40, stdev=6754.27 00:18:29.467 clat percentiles (usec): 00:18:29.467 | 1.00th=[ 725], 5.00th=[ 857], 10.00th=[ 1254], 20.00th=[ 2671], 00:18:29.467 | 30.00th=[ 3523], 40.00th=[ 4359], 50.00th=[ 5014], 60.00th=[ 5604], 00:18:29.467 | 70.00th=[ 6259], 80.00th=[10028], 90.00th=[15533], 95.00th=[22414], 00:18:29.467 | 99.00th=[31589], 99.50th=[33817], 99.90th=[45876], 99.95th=[48497], 00:18:29.467 | 99.99th=[50594] 00:18:29.467 bw ( KiB/s): min= 3024, max=48248, per=100.00%, avg=24800.14, stdev=13435.99, samples=21 00:18:29.467 iops : min= 756, max=12062, avg=6200.00, stdev=3359.04, samples=21 00:18:29.467 lat (usec) : 500=0.04%, 750=0.74%, 1000=3.17% 00:18:29.467 lat (msec) : 2=2.91%, 4=11.10%, 10=23.11%, 20=7.32%, 50=48.07% 00:18:29.467 lat (msec) : 100=1.63%, 250=1.78%, 500=0.12% 00:18:29.467 cpu : usr=99.26%, sys=0.14%, ctx=32, majf=0, minf=5559 00:18:29.467 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:29.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.467 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:29.467 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.467 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:29.467 second_half: (groupid=0, jobs=1): err= 0: pid=86260: Wed Nov 27 21:50:49 2024 00:18:29.467 read: IOPS=2892, BW=11.3MiB/s (11.8MB/s)(256MiB/22644msec) 00:18:29.467 slat (nsec): min=3102, max=45250, avg=5137.97, stdev=991.57 00:18:29.467 clat (msec): min=9, max=376, avg=37.70, stdev=23.25 00:18:29.467 lat (msec): min=9, max=376, avg=37.70, stdev=23.25 00:18:29.467 clat percentiles (msec): 00:18:29.467 | 1.00th=[ 26], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 31], 00:18:29.468 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 33], 00:18:29.468 | 70.00th=[ 36], 80.00th=[ 37], 90.00th=[ 43], 95.00th=[ 70], 00:18:29.468 | 99.00th=[ 157], 99.50th=[ 
178], 99.90th=[ 247], 99.95th=[ 255], 00:18:29.468 | 99.99th=[ 309] 00:18:29.468 write: IOPS=2907, BW=11.4MiB/s (11.9MB/s)(256MiB/22544msec); 0 zone resets 00:18:29.468 slat (usec): min=3, max=2797, avg= 6.41, stdev=13.29 00:18:29.468 clat (usec): min=377, max=33556, avg=6538.90, stdev=4854.49 00:18:29.468 lat (usec): min=385, max=33577, avg=6545.31, stdev=4854.82 00:18:29.468 clat percentiles (usec): 00:18:29.468 | 1.00th=[ 889], 5.00th=[ 1926], 10.00th=[ 2573], 20.00th=[ 3228], 00:18:29.468 | 30.00th=[ 3818], 40.00th=[ 4490], 50.00th=[ 5145], 60.00th=[ 5669], 00:18:29.468 | 70.00th=[ 6128], 80.00th=[ 8717], 90.00th=[13960], 95.00th=[17695], 00:18:29.468 | 99.00th=[23200], 99.50th=[26084], 99.90th=[29492], 99.95th=[30016], 00:18:29.468 | 99.99th=[30802] 00:18:29.468 bw ( KiB/s): min= 400, max=47720, per=98.89%, avg=22754.96, stdev=11868.98, samples=23 00:18:29.468 iops : min= 100, max=11930, avg=5688.74, stdev=2967.25, samples=23 00:18:29.468 lat (usec) : 500=0.03%, 750=0.12%, 1000=0.80% 00:18:29.468 lat (msec) : 2=1.72%, 4=13.52%, 10=25.20%, 20=7.25%, 50=47.90% 00:18:29.468 lat (msec) : 100=1.71%, 250=1.72%, 500=0.04% 00:18:29.468 cpu : usr=99.31%, sys=0.16%, ctx=46, majf=0, minf=5579 00:18:29.468 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:29.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:29.468 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:29.468 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:29.468 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:29.468 00:18:29.468 Run status group 0 (all jobs): 00:18:29.468 READ: bw=22.4MiB/s (23.5MB/s), 11.2MiB/s-11.3MiB/s (11.8MB/s-11.8MB/s), io=512MiB (536MB), run=22644-22814msec 00:18:29.468 WRITE: bw=22.5MiB/s (23.6MB/s), 11.2MiB/s-11.4MiB/s (11.8MB/s-11.9MB/s), io=512MiB (537MB), run=22544-22786msec 00:18:29.468 ----------------------------------------------------- 00:18:29.468 Suppressions used: 00:18:29.468 count bytes template 00:18:29.468 2 10 /usr/src/fio/parse.c 00:18:29.468 3 288 /usr/src/fio/iolog.c 00:18:29.468 1 8 libtcmalloc_minimal.so 00:18:29.468 1 904 libcrypto.so 00:18:29.468 ----------------------------------------------------- 00:18:29.468 00:18:29.468 21:50:51 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:29.468 21:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:29.468 21:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:29.468 21:50:51 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:29.468 21:50:51 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:29.468 21:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:29.468 21:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:29.468 21:50:51 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:29.468 21:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:29.468 21:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:29.468 21:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:29.468 21:50:51 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1343 -- # local sanitizers 00:18:29.468 21:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:29.468 21:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:29.468 21:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:29.468 21:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:29.468 21:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:29.468 21:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:29.468 21:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:29.468 21:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:29.468 21:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:29.468 21:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:29.468 21:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:29.468 21:50:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:29.468 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:29.468 fio-3.35 00:18:29.468 Starting 1 thread 00:18:44.451 00:18:44.451 test: (groupid=0, jobs=1): err= 0: pid=86557: Wed Nov 27 21:51:06 2024 00:18:44.451 read: IOPS=7854, BW=30.7MiB/s (32.2MB/s)(255MiB/8301msec) 00:18:44.451 slat (nsec): min=3139, max=31627, avg=4907.03, stdev=1163.51 00:18:44.451 clat (usec): min=556, max=31872, avg=16287.75, stdev=1868.10 00:18:44.451 lat (usec): min=560, max=31876, avg=16292.66, stdev=1868.13 00:18:44.451 clat percentiles (usec): 00:18:44.451 | 1.00th=[14746], 5.00th=[15008], 10.00th=[15139], 20.00th=[15270], 00:18:44.451 | 30.00th=[15533], 40.00th=[15664], 50.00th=[15926], 60.00th=[16057], 00:18:44.451 | 70.00th=[16188], 80.00th=[16450], 90.00th=[17171], 95.00th=[20055], 00:18:44.451 | 99.00th=[25035], 99.50th=[26346], 99.90th=[29230], 99.95th=[30540], 00:18:44.451 | 99.99th=[31327] 00:18:44.451 write: IOPS=10.1k, BW=39.6MiB/s (41.6MB/s)(256MiB/6458msec); 0 zone resets 00:18:44.451 slat (usec): min=4, max=1425, avg= 7.83, stdev= 6.97 00:18:44.451 clat (usec): min=570, max=62555, avg=12552.54, stdev=13435.57 00:18:44.451 lat (usec): min=576, max=62572, avg=12560.36, stdev=13435.62 00:18:44.451 clat percentiles (usec): 00:18:44.451 | 1.00th=[ 824], 5.00th=[ 1045], 10.00th=[ 1156], 20.00th=[ 1336], 00:18:44.451 | 30.00th=[ 1532], 40.00th=[ 2540], 50.00th=[10421], 60.00th=[12780], 00:18:44.451 | 70.00th=[14615], 80.00th=[16909], 90.00th=[37487], 95.00th=[39584], 00:18:44.451 | 99.00th=[53216], 99.50th=[57410], 99.90th=[61080], 99.95th=[61080], 00:18:44.451 | 99.99th=[62129] 00:18:44.451 bw ( KiB/s): min=31784, max=45832, per=99.34%, avg=40324.77, stdev=4590.00, samples=13 00:18:44.451 iops : min= 7946, max=11458, avg=10081.15, stdev=1147.57, samples=13 00:18:44.451 lat (usec) : 750=0.22%, 1000=1.67% 00:18:44.451 lat (msec) : 2=17.06%, 4=1.88%, 10=3.57%, 20=64.96%, 50=9.88% 00:18:44.451 lat (msec) : 100=0.76% 00:18:44.451 cpu : usr=98.88%, sys=0.28%, ctx=33, majf=0, minf=5577 00:18:44.451 IO depths 
: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:44.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.451 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:44.451 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:44.451 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:44.451 00:18:44.451 Run status group 0 (all jobs): 00:18:44.451 READ: bw=30.7MiB/s (32.2MB/s), 30.7MiB/s-30.7MiB/s (32.2MB/s-32.2MB/s), io=255MiB (267MB), run=8301-8301msec 00:18:44.451 WRITE: bw=39.6MiB/s (41.6MB/s), 39.6MiB/s-39.6MiB/s (41.6MB/s-41.6MB/s), io=256MiB (268MB), run=6458-6458msec 00:18:44.712 ----------------------------------------------------- 00:18:44.712 Suppressions used: 00:18:44.712 count bytes template 00:18:44.712 1 5 /usr/src/fio/parse.c 00:18:44.712 2 192 /usr/src/fio/iolog.c 00:18:44.712 1 8 libtcmalloc_minimal.so 00:18:44.712 1 904 libcrypto.so 00:18:44.712 ----------------------------------------------------- 00:18:44.712 00:18:44.712 21:51:07 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:44.712 21:51:07 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:44.712 21:51:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:44.974 21:51:07 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:44.974 21:51:07 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:18:44.974 Remove shared memory files 00:18:44.974 21:51:07 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:44.974 21:51:07 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:18:44.974 21:51:07 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:18:44.974 21:51:07 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid69053 /dev/shm/spdk_tgt_trace.pid84948 00:18:44.974 21:51:07 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:44.974 21:51:07 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:18:44.974 00:18:44.974 real 0m59.734s 00:18:44.974 user 2m10.229s 00:18:44.974 sys 0m2.706s 00:18:44.974 21:51:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:44.974 21:51:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:44.974 ************************************ 00:18:44.974 END TEST ftl_fio_basic 00:18:44.974 ************************************ 00:18:44.974 21:51:07 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:44.974 21:51:07 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:44.974 21:51:07 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:44.974 21:51:07 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:44.974 ************************************ 00:18:44.974 START TEST ftl_bdevperf 00:18:44.974 ************************************ 00:18:44.974 21:51:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:44.974 * Looking for test storage... 
00:18:44.974 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:44.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.974 --rc genhtml_branch_coverage=1 00:18:44.974 --rc genhtml_function_coverage=1 00:18:44.974 --rc genhtml_legend=1 00:18:44.974 --rc geninfo_all_blocks=1 00:18:44.974 --rc geninfo_unexecuted_blocks=1 00:18:44.974 00:18:44.974 ' 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:44.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.974 --rc genhtml_branch_coverage=1 00:18:44.974 
--rc genhtml_function_coverage=1 00:18:44.974 --rc genhtml_legend=1 00:18:44.974 --rc geninfo_all_blocks=1 00:18:44.974 --rc geninfo_unexecuted_blocks=1 00:18:44.974 00:18:44.974 ' 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:44.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.974 --rc genhtml_branch_coverage=1 00:18:44.974 --rc genhtml_function_coverage=1 00:18:44.974 --rc genhtml_legend=1 00:18:44.974 --rc geninfo_all_blocks=1 00:18:44.974 --rc geninfo_unexecuted_blocks=1 00:18:44.974 00:18:44.974 ' 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:44.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.974 --rc genhtml_branch_coverage=1 00:18:44.974 --rc genhtml_function_coverage=1 00:18:44.974 --rc genhtml_legend=1 00:18:44.974 --rc geninfo_all_blocks=1 00:18:44.974 --rc geninfo_unexecuted_blocks=1 00:18:44.974 00:18:44.974 ' 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:44.974 21:51:08 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:45.235 21:51:08 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=86800 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 86800 00:18:45.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 86800 ']' 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.236 21:51:08 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:45.236 [2024-11-27 21:51:08.174419] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
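The trace above shows how the bdevperf harness is brought up before any FTL device exists: the bdevperf example binary is launched against a target bdev name of ftl0 with the -z flag (which appears to hold the workload until it is triggered over RPC), and the script then waits for the RPC socket before building the bdev stack. A minimal sketch of that bring-up, using only the binary path and flags that appear in the trace; the polling loop below is an assumption, not the harness's exact waitforlisten implementation:

  # Start bdevperf idle (-z) and point it at the bdev that will be created later (-T ftl0).
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
  bdevperf_pid=$!

  # Assumed wait loop: poll the default RPC socket until it accepts commands.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
          sleep 0.5
  done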
00:18:45.236 [2024-11-27 21:51:08.174564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86800 ] 00:18:45.236 [2024-11-27 21:51:08.323617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.236 [2024-11-27 21:51:08.354869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.179 21:51:09 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.179 21:51:09 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:18:46.179 21:51:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:46.179 21:51:09 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:18:46.179 21:51:09 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:46.179 21:51:09 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:18:46.179 21:51:09 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:18:46.179 21:51:09 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:46.441 21:51:09 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:46.441 21:51:09 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:18:46.441 21:51:09 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:46.441 21:51:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:46.441 21:51:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:46.441 21:51:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:46.441 21:51:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:46.441 21:51:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:46.441 21:51:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:46.441 { 00:18:46.441 "name": "nvme0n1", 00:18:46.441 "aliases": [ 00:18:46.441 "2aca0e2f-a116-4805-baec-1090134dce50" 00:18:46.441 ], 00:18:46.441 "product_name": "NVMe disk", 00:18:46.441 "block_size": 4096, 00:18:46.441 "num_blocks": 1310720, 00:18:46.441 "uuid": "2aca0e2f-a116-4805-baec-1090134dce50", 00:18:46.441 "numa_id": -1, 00:18:46.441 "assigned_rate_limits": { 00:18:46.441 "rw_ios_per_sec": 0, 00:18:46.441 "rw_mbytes_per_sec": 0, 00:18:46.441 "r_mbytes_per_sec": 0, 00:18:46.441 "w_mbytes_per_sec": 0 00:18:46.441 }, 00:18:46.441 "claimed": true, 00:18:46.441 "claim_type": "read_many_write_one", 00:18:46.441 "zoned": false, 00:18:46.441 "supported_io_types": { 00:18:46.441 "read": true, 00:18:46.441 "write": true, 00:18:46.441 "unmap": true, 00:18:46.441 "flush": true, 00:18:46.441 "reset": true, 00:18:46.441 "nvme_admin": true, 00:18:46.441 "nvme_io": true, 00:18:46.441 "nvme_io_md": false, 00:18:46.441 "write_zeroes": true, 00:18:46.441 "zcopy": false, 00:18:46.441 "get_zone_info": false, 00:18:46.441 "zone_management": false, 00:18:46.441 "zone_append": false, 00:18:46.441 "compare": true, 00:18:46.441 "compare_and_write": false, 00:18:46.441 "abort": true, 00:18:46.441 "seek_hole": false, 00:18:46.441 "seek_data": false, 00:18:46.441 "copy": true, 00:18:46.441 "nvme_iov_md": false 00:18:46.441 }, 00:18:46.441 "driver_specific": { 00:18:46.441 
"nvme": [ 00:18:46.441 { 00:18:46.441 "pci_address": "0000:00:11.0", 00:18:46.441 "trid": { 00:18:46.441 "trtype": "PCIe", 00:18:46.441 "traddr": "0000:00:11.0" 00:18:46.441 }, 00:18:46.441 "ctrlr_data": { 00:18:46.441 "cntlid": 0, 00:18:46.441 "vendor_id": "0x1b36", 00:18:46.441 "model_number": "QEMU NVMe Ctrl", 00:18:46.441 "serial_number": "12341", 00:18:46.441 "firmware_revision": "8.0.0", 00:18:46.441 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:46.441 "oacs": { 00:18:46.441 "security": 0, 00:18:46.441 "format": 1, 00:18:46.441 "firmware": 0, 00:18:46.441 "ns_manage": 1 00:18:46.441 }, 00:18:46.441 "multi_ctrlr": false, 00:18:46.441 "ana_reporting": false 00:18:46.441 }, 00:18:46.441 "vs": { 00:18:46.441 "nvme_version": "1.4" 00:18:46.441 }, 00:18:46.441 "ns_data": { 00:18:46.441 "id": 1, 00:18:46.441 "can_share": false 00:18:46.441 } 00:18:46.441 } 00:18:46.441 ], 00:18:46.441 "mp_policy": "active_passive" 00:18:46.441 } 00:18:46.441 } 00:18:46.441 ]' 00:18:46.702 21:51:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:46.702 21:51:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:46.702 21:51:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:46.702 21:51:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:46.702 21:51:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:46.702 21:51:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:18:46.702 21:51:09 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:18:46.702 21:51:09 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:46.702 21:51:09 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:18:46.702 21:51:09 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:46.702 21:51:09 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:46.964 21:51:09 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=cd0add2f-c8f0-44ca-9f19-707150f344a4 00:18:46.964 21:51:09 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:18:46.964 21:51:09 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cd0add2f-c8f0-44ca-9f19-707150f344a4 00:18:46.964 21:51:10 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:47.225 21:51:10 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=483523ae-5747-4114-b1ea-f0528d3378ed 00:18:47.225 21:51:10 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 483523ae-5747-4114-b1ea-f0528d3378ed 00:18:47.487 21:51:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=2e10147c-aabf-4ad1-8e0d-554ee7356957 00:18:47.487 21:51:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2e10147c-aabf-4ad1-8e0d-554ee7356957 00:18:47.487 21:51:10 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:18:47.487 21:51:10 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:47.487 21:51:10 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=2e10147c-aabf-4ad1-8e0d-554ee7356957 00:18:47.487 21:51:10 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:18:47.487 21:51:10 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 2e10147c-aabf-4ad1-8e0d-554ee7356957 00:18:47.487 21:51:10 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=2e10147c-aabf-4ad1-8e0d-554ee7356957 00:18:47.487 21:51:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:47.487 21:51:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:47.487 21:51:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:47.487 21:51:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2e10147c-aabf-4ad1-8e0d-554ee7356957 00:18:47.747 21:51:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:47.747 { 00:18:47.747 "name": "2e10147c-aabf-4ad1-8e0d-554ee7356957", 00:18:47.747 "aliases": [ 00:18:47.747 "lvs/nvme0n1p0" 00:18:47.747 ], 00:18:47.747 "product_name": "Logical Volume", 00:18:47.747 "block_size": 4096, 00:18:47.747 "num_blocks": 26476544, 00:18:47.747 "uuid": "2e10147c-aabf-4ad1-8e0d-554ee7356957", 00:18:47.747 "assigned_rate_limits": { 00:18:47.747 "rw_ios_per_sec": 0, 00:18:47.747 "rw_mbytes_per_sec": 0, 00:18:47.747 "r_mbytes_per_sec": 0, 00:18:47.747 "w_mbytes_per_sec": 0 00:18:47.747 }, 00:18:47.747 "claimed": false, 00:18:47.747 "zoned": false, 00:18:47.747 "supported_io_types": { 00:18:47.747 "read": true, 00:18:47.747 "write": true, 00:18:47.747 "unmap": true, 00:18:47.747 "flush": false, 00:18:47.747 "reset": true, 00:18:47.747 "nvme_admin": false, 00:18:47.747 "nvme_io": false, 00:18:47.747 "nvme_io_md": false, 00:18:47.747 "write_zeroes": true, 00:18:47.747 "zcopy": false, 00:18:47.747 "get_zone_info": false, 00:18:47.747 "zone_management": false, 00:18:47.747 "zone_append": false, 00:18:47.747 "compare": false, 00:18:47.747 "compare_and_write": false, 00:18:47.747 "abort": false, 00:18:47.747 "seek_hole": true, 00:18:47.747 "seek_data": true, 00:18:47.747 "copy": false, 00:18:47.747 "nvme_iov_md": false 00:18:47.747 }, 00:18:47.747 "driver_specific": { 00:18:47.747 "lvol": { 00:18:47.747 "lvol_store_uuid": "483523ae-5747-4114-b1ea-f0528d3378ed", 00:18:47.747 "base_bdev": "nvme0n1", 00:18:47.747 "thin_provision": true, 00:18:47.747 "num_allocated_clusters": 0, 00:18:47.747 "snapshot": false, 00:18:47.747 "clone": false, 00:18:47.747 "esnap_clone": false 00:18:47.747 } 00:18:47.747 } 00:18:47.747 } 00:18:47.747 ]' 00:18:47.747 21:51:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:47.747 21:51:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:47.747 21:51:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:47.747 21:51:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:47.747 21:51:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:47.747 21:51:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:47.747 21:51:10 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:18:47.747 21:51:10 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:18:47.747 21:51:10 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:48.005 21:51:11 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:48.005 21:51:11 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:48.005 21:51:11 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 2e10147c-aabf-4ad1-8e0d-554ee7356957 00:18:48.005 21:51:11 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=2e10147c-aabf-4ad1-8e0d-554ee7356957 00:18:48.005 21:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:48.005 21:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:48.005 21:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:48.005 21:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2e10147c-aabf-4ad1-8e0d-554ee7356957 00:18:48.263 21:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:48.263 { 00:18:48.263 "name": "2e10147c-aabf-4ad1-8e0d-554ee7356957", 00:18:48.263 "aliases": [ 00:18:48.263 "lvs/nvme0n1p0" 00:18:48.263 ], 00:18:48.263 "product_name": "Logical Volume", 00:18:48.263 "block_size": 4096, 00:18:48.263 "num_blocks": 26476544, 00:18:48.263 "uuid": "2e10147c-aabf-4ad1-8e0d-554ee7356957", 00:18:48.263 "assigned_rate_limits": { 00:18:48.263 "rw_ios_per_sec": 0, 00:18:48.263 "rw_mbytes_per_sec": 0, 00:18:48.263 "r_mbytes_per_sec": 0, 00:18:48.263 "w_mbytes_per_sec": 0 00:18:48.263 }, 00:18:48.263 "claimed": false, 00:18:48.263 "zoned": false, 00:18:48.263 "supported_io_types": { 00:18:48.263 "read": true, 00:18:48.263 "write": true, 00:18:48.263 "unmap": true, 00:18:48.263 "flush": false, 00:18:48.263 "reset": true, 00:18:48.263 "nvme_admin": false, 00:18:48.263 "nvme_io": false, 00:18:48.263 "nvme_io_md": false, 00:18:48.263 "write_zeroes": true, 00:18:48.263 "zcopy": false, 00:18:48.263 "get_zone_info": false, 00:18:48.263 "zone_management": false, 00:18:48.263 "zone_append": false, 00:18:48.263 "compare": false, 00:18:48.263 "compare_and_write": false, 00:18:48.263 "abort": false, 00:18:48.263 "seek_hole": true, 00:18:48.263 "seek_data": true, 00:18:48.263 "copy": false, 00:18:48.263 "nvme_iov_md": false 00:18:48.263 }, 00:18:48.263 "driver_specific": { 00:18:48.263 "lvol": { 00:18:48.263 "lvol_store_uuid": "483523ae-5747-4114-b1ea-f0528d3378ed", 00:18:48.263 "base_bdev": "nvme0n1", 00:18:48.263 "thin_provision": true, 00:18:48.263 "num_allocated_clusters": 0, 00:18:48.263 "snapshot": false, 00:18:48.263 "clone": false, 00:18:48.263 "esnap_clone": false 00:18:48.263 } 00:18:48.263 } 00:18:48.263 } 00:18:48.263 ]' 00:18:48.263 21:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:48.263 21:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:48.263 21:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:48.263 21:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:48.263 21:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:48.263 21:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:48.263 21:51:11 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:18:48.263 21:51:11 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:48.521 21:51:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:18:48.521 21:51:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 2e10147c-aabf-4ad1-8e0d-554ee7356957 00:18:48.521 21:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=2e10147c-aabf-4ad1-8e0d-554ee7356957 00:18:48.521 21:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:48.521 21:51:11 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:18:48.521 21:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:48.521 21:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2e10147c-aabf-4ad1-8e0d-554ee7356957 00:18:48.780 21:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:48.780 { 00:18:48.780 "name": "2e10147c-aabf-4ad1-8e0d-554ee7356957", 00:18:48.780 "aliases": [ 00:18:48.780 "lvs/nvme0n1p0" 00:18:48.780 ], 00:18:48.780 "product_name": "Logical Volume", 00:18:48.780 "block_size": 4096, 00:18:48.780 "num_blocks": 26476544, 00:18:48.780 "uuid": "2e10147c-aabf-4ad1-8e0d-554ee7356957", 00:18:48.780 "assigned_rate_limits": { 00:18:48.780 "rw_ios_per_sec": 0, 00:18:48.780 "rw_mbytes_per_sec": 0, 00:18:48.780 "r_mbytes_per_sec": 0, 00:18:48.780 "w_mbytes_per_sec": 0 00:18:48.780 }, 00:18:48.780 "claimed": false, 00:18:48.780 "zoned": false, 00:18:48.780 "supported_io_types": { 00:18:48.780 "read": true, 00:18:48.780 "write": true, 00:18:48.780 "unmap": true, 00:18:48.780 "flush": false, 00:18:48.780 "reset": true, 00:18:48.780 "nvme_admin": false, 00:18:48.780 "nvme_io": false, 00:18:48.780 "nvme_io_md": false, 00:18:48.780 "write_zeroes": true, 00:18:48.780 "zcopy": false, 00:18:48.780 "get_zone_info": false, 00:18:48.780 "zone_management": false, 00:18:48.780 "zone_append": false, 00:18:48.780 "compare": false, 00:18:48.780 "compare_and_write": false, 00:18:48.780 "abort": false, 00:18:48.780 "seek_hole": true, 00:18:48.780 "seek_data": true, 00:18:48.780 "copy": false, 00:18:48.780 "nvme_iov_md": false 00:18:48.780 }, 00:18:48.780 "driver_specific": { 00:18:48.780 "lvol": { 00:18:48.780 "lvol_store_uuid": "483523ae-5747-4114-b1ea-f0528d3378ed", 00:18:48.780 "base_bdev": "nvme0n1", 00:18:48.780 "thin_provision": true, 00:18:48.780 "num_allocated_clusters": 0, 00:18:48.780 "snapshot": false, 00:18:48.780 "clone": false, 00:18:48.780 "esnap_clone": false 00:18:48.780 } 00:18:48.780 } 00:18:48.780 } 00:18:48.780 ]' 00:18:48.780 21:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:48.780 21:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:48.780 21:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:48.780 21:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:48.780 21:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:48.780 21:51:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:48.780 21:51:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:18:48.780 21:51:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2e10147c-aabf-4ad1-8e0d-554ee7356957 -c nvc0n1p0 --l2p_dram_limit 20 00:18:49.039 [2024-11-27 21:51:11.996707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.039 [2024-11-27 21:51:11.996746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:49.039 [2024-11-27 21:51:11.996759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:49.039 [2024-11-27 21:51:11.996765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.039 [2024-11-27 21:51:11.996812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.039 [2024-11-27 21:51:11.996820] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:49.039 [2024-11-27 21:51:11.996829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:49.039 [2024-11-27 21:51:11.996834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.039 [2024-11-27 21:51:11.996850] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:49.039 [2024-11-27 21:51:11.997051] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:49.039 [2024-11-27 21:51:11.997065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.039 [2024-11-27 21:51:11.997072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:49.039 [2024-11-27 21:51:11.997080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:18:49.039 [2024-11-27 21:51:11.997086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.039 [2024-11-27 21:51:11.997132] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1a0f5ff4-c9f5-4de8-8448-bde593e1bcb8 00:18:49.039 [2024-11-27 21:51:11.998106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.039 [2024-11-27 21:51:11.998210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:49.039 [2024-11-27 21:51:11.998223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:18:49.039 [2024-11-27 21:51:11.998233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.039 [2024-11-27 21:51:12.002881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.039 [2024-11-27 21:51:12.002912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:49.039 [2024-11-27 21:51:12.002920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.619 ms 00:18:49.039 [2024-11-27 21:51:12.002932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.039 [2024-11-27 21:51:12.002985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.039 [2024-11-27 21:51:12.002995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:49.039 [2024-11-27 21:51:12.003001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:49.039 [2024-11-27 21:51:12.003008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.039 [2024-11-27 21:51:12.003039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.039 [2024-11-27 21:51:12.003048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:49.039 [2024-11-27 21:51:12.003055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:49.039 [2024-11-27 21:51:12.003065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.039 [2024-11-27 21:51:12.003080] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:49.039 [2024-11-27 21:51:12.004329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.039 [2024-11-27 21:51:12.004361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:49.039 [2024-11-27 21:51:12.004370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.252 ms 00:18:49.039 [2024-11-27 21:51:12.004378] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.039 [2024-11-27 21:51:12.004403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.039 [2024-11-27 21:51:12.004409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:49.039 [2024-11-27 21:51:12.004418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:49.039 [2024-11-27 21:51:12.004426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.039 [2024-11-27 21:51:12.004445] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:49.039 [2024-11-27 21:51:12.004554] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:49.039 [2024-11-27 21:51:12.004566] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:49.039 [2024-11-27 21:51:12.004574] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:49.039 [2024-11-27 21:51:12.004584] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:49.039 [2024-11-27 21:51:12.004593] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:49.039 [2024-11-27 21:51:12.004601] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:49.039 [2024-11-27 21:51:12.004606] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:49.039 [2024-11-27 21:51:12.004616] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:49.039 [2024-11-27 21:51:12.004623] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:49.040 [2024-11-27 21:51:12.004631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.040 [2024-11-27 21:51:12.004636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:49.040 [2024-11-27 21:51:12.004644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:18:49.040 [2024-11-27 21:51:12.004650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.040 [2024-11-27 21:51:12.004714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.040 [2024-11-27 21:51:12.004722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:49.040 [2024-11-27 21:51:12.004732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:49.040 [2024-11-27 21:51:12.004738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.040 [2024-11-27 21:51:12.004810] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:49.040 [2024-11-27 21:51:12.004817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:49.040 [2024-11-27 21:51:12.004824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:49.040 [2024-11-27 21:51:12.004831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:49.040 [2024-11-27 21:51:12.004838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:49.040 [2024-11-27 21:51:12.004843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:49.040 [2024-11-27 21:51:12.004850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:49.040 
[2024-11-27 21:51:12.004855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:49.040 [2024-11-27 21:51:12.004862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:49.040 [2024-11-27 21:51:12.004866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:49.040 [2024-11-27 21:51:12.004872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:49.040 [2024-11-27 21:51:12.004878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:49.040 [2024-11-27 21:51:12.004886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:49.040 [2024-11-27 21:51:12.004891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:49.040 [2024-11-27 21:51:12.004897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:49.040 [2024-11-27 21:51:12.004902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:49.040 [2024-11-27 21:51:12.004909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:49.040 [2024-11-27 21:51:12.004915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:49.040 [2024-11-27 21:51:12.004921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:49.040 [2024-11-27 21:51:12.004926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:49.040 [2024-11-27 21:51:12.004932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:49.040 [2024-11-27 21:51:12.004937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:49.040 [2024-11-27 21:51:12.004943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:49.040 [2024-11-27 21:51:12.004948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:49.040 [2024-11-27 21:51:12.004954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:49.040 [2024-11-27 21:51:12.004959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:49.040 [2024-11-27 21:51:12.004965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:49.040 [2024-11-27 21:51:12.004970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:49.040 [2024-11-27 21:51:12.004979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:49.040 [2024-11-27 21:51:12.004984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:49.040 [2024-11-27 21:51:12.004990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:49.040 [2024-11-27 21:51:12.004994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:49.040 [2024-11-27 21:51:12.005000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:49.040 [2024-11-27 21:51:12.005007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:49.040 [2024-11-27 21:51:12.005013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:49.040 [2024-11-27 21:51:12.005017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:49.040 [2024-11-27 21:51:12.005024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:49.040 [2024-11-27 21:51:12.005029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:49.040 [2024-11-27 21:51:12.005035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:18:49.040 [2024-11-27 21:51:12.005040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:49.040 [2024-11-27 21:51:12.005046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:49.040 [2024-11-27 21:51:12.005051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:49.040 [2024-11-27 21:51:12.005057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:49.040 [2024-11-27 21:51:12.005062] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:49.040 [2024-11-27 21:51:12.005075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:49.040 [2024-11-27 21:51:12.005080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:49.040 [2024-11-27 21:51:12.005087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:49.040 [2024-11-27 21:51:12.005093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:49.040 [2024-11-27 21:51:12.005100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:49.040 [2024-11-27 21:51:12.005105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:49.040 [2024-11-27 21:51:12.005111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:49.040 [2024-11-27 21:51:12.005116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:49.040 [2024-11-27 21:51:12.005123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:49.040 [2024-11-27 21:51:12.005131] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:49.040 [2024-11-27 21:51:12.005139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:49.040 [2024-11-27 21:51:12.005147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:49.040 [2024-11-27 21:51:12.005154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:49.040 [2024-11-27 21:51:12.005159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:49.040 [2024-11-27 21:51:12.005166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:49.040 [2024-11-27 21:51:12.005171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:49.040 [2024-11-27 21:51:12.005180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:49.040 [2024-11-27 21:51:12.005185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:49.040 [2024-11-27 21:51:12.005195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:49.040 [2024-11-27 21:51:12.005200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:49.040 [2024-11-27 21:51:12.005207] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:49.040 [2024-11-27 21:51:12.005212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:49.040 [2024-11-27 21:51:12.005219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:49.040 [2024-11-27 21:51:12.005225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:49.040 [2024-11-27 21:51:12.005232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:49.040 [2024-11-27 21:51:12.005237] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:49.040 [2024-11-27 21:51:12.005245] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:49.040 [2024-11-27 21:51:12.005250] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:49.040 [2024-11-27 21:51:12.005257] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:49.040 [2024-11-27 21:51:12.005263] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:49.040 [2024-11-27 21:51:12.005271] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:49.040 [2024-11-27 21:51:12.005276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.040 [2024-11-27 21:51:12.005285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:49.040 [2024-11-27 21:51:12.005291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:18:49.040 [2024-11-27 21:51:12.005298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.040 [2024-11-27 21:51:12.005321] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:18:49.040 [2024-11-27 21:51:12.005329] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:53.243 [2024-11-27 21:51:15.555454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.243 [2024-11-27 21:51:15.555723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:53.243 [2024-11-27 21:51:15.555836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3550.118 ms 00:18:53.243 [2024-11-27 21:51:15.555873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.243 [2024-11-27 21:51:15.570011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.243 [2024-11-27 21:51:15.570224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:53.243 [2024-11-27 21:51:15.570322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.942 ms 00:18:53.243 [2024-11-27 21:51:15.570387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.243 [2024-11-27 21:51:15.570533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.243 [2024-11-27 21:51:15.570566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:53.243 [2024-11-27 21:51:15.570593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:18:53.243 [2024-11-27 21:51:15.570622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.243 [2024-11-27 21:51:15.592763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.243 [2024-11-27 21:51:15.593052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:53.243 [2024-11-27 21:51:15.593085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.985 ms 00:18:53.243 [2024-11-27 21:51:15.593103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.243 [2024-11-27 21:51:15.593164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.243 [2024-11-27 21:51:15.593182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:53.243 [2024-11-27 21:51:15.593197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:53.243 [2024-11-27 21:51:15.593212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.243 [2024-11-27 21:51:15.593888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.243 [2024-11-27 21:51:15.593940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:53.243 [2024-11-27 21:51:15.593957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:18:53.243 [2024-11-27 21:51:15.593978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.243 [2024-11-27 21:51:15.594162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.243 [2024-11-27 21:51:15.594197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:53.243 [2024-11-27 21:51:15.594210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:18:53.243 [2024-11-27 21:51:15.594225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.243 [2024-11-27 21:51:15.602557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.243 [2024-11-27 21:51:15.602606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:53.243 [2024-11-27 
21:51:15.602622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.307 ms 00:18:53.243 [2024-11-27 21:51:15.602632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.243 [2024-11-27 21:51:15.612953] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:18:53.243 [2024-11-27 21:51:15.621081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.244 [2024-11-27 21:51:15.621124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:53.244 [2024-11-27 21:51:15.621138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.349 ms 00:18:53.244 [2024-11-27 21:51:15.621146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.244 [2024-11-27 21:51:15.711163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.244 [2024-11-27 21:51:15.711225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:53.244 [2024-11-27 21:51:15.711244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.982 ms 00:18:53.244 [2024-11-27 21:51:15.711257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.244 [2024-11-27 21:51:15.711497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.244 [2024-11-27 21:51:15.711510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:53.244 [2024-11-27 21:51:15.711527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:18:53.244 [2024-11-27 21:51:15.711535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.244 [2024-11-27 21:51:15.717320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.244 [2024-11-27 21:51:15.717386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:53.244 [2024-11-27 21:51:15.717408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.742 ms 00:18:53.244 [2024-11-27 21:51:15.717420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.244 [2024-11-27 21:51:15.722653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.244 [2024-11-27 21:51:15.722853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:53.244 [2024-11-27 21:51:15.722884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.183 ms 00:18:53.244 [2024-11-27 21:51:15.722893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.244 [2024-11-27 21:51:15.723613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.244 [2024-11-27 21:51:15.723662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:53.244 [2024-11-27 21:51:15.723694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:18:53.244 [2024-11-27 21:51:15.723720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.244 [2024-11-27 21:51:15.766114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.244 [2024-11-27 21:51:15.766318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:53.244 [2024-11-27 21:51:15.766373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.350 ms 00:18:53.244 [2024-11-27 21:51:15.766391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.244 [2024-11-27 21:51:15.773471] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.244 [2024-11-27 21:51:15.773520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:53.244 [2024-11-27 21:51:15.773535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.010 ms 00:18:53.244 [2024-11-27 21:51:15.773544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.244 [2024-11-27 21:51:15.779331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.244 [2024-11-27 21:51:15.779392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:53.244 [2024-11-27 21:51:15.779405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.730 ms 00:18:53.244 [2024-11-27 21:51:15.779412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.244 [2024-11-27 21:51:15.785463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.244 [2024-11-27 21:51:15.785509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:53.244 [2024-11-27 21:51:15.785526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.002 ms 00:18:53.244 [2024-11-27 21:51:15.785534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.244 [2024-11-27 21:51:15.785589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.244 [2024-11-27 21:51:15.785598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:53.244 [2024-11-27 21:51:15.785610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:53.244 [2024-11-27 21:51:15.785618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.244 [2024-11-27 21:51:15.785693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.244 [2024-11-27 21:51:15.785705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:53.244 [2024-11-27 21:51:15.785715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:53.244 [2024-11-27 21:51:15.785723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.244 [2024-11-27 21:51:15.786959] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3789.624 ms, result 0 00:18:53.244 { 00:18:53.244 "name": "ftl0", 00:18:53.244 "uuid": "1a0f5ff4-c9f5-4de8-8448-bde593e1bcb8" 00:18:53.244 } 00:18:53.244 21:51:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:18:53.244 21:51:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:18:53.244 21:51:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:18:53.244 21:51:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:18:53.244 [2024-11-27 21:51:16.133159] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:53.244 I/O size of 69632 is greater than zero copy threshold (65536). 00:18:53.244 Zero copy mechanism will not be used. 00:18:53.244 Running I/O for 4 seconds... 
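A quick sanity check of the zero-copy notice above, using only the two sizes printed in this log: the test issues 69632-byte I/Os, which exceed the 65536-byte zero-copy threshold, so the zero-copy path is skipped.

  # 68 KiB writes vs. the 64 KiB zero-copy threshold
  echo $(( 69632 > 65536 ))   # prints 1: the I/O size is over the threshold
  echo $(( 69632 / 4096 ))    # prints 17: each I/O covers 17 4-KiB blocks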
00:18:55.125 731.00 IOPS, 48.54 MiB/s [2024-11-27T21:51:19.180Z] 1430.00 IOPS, 94.96 MiB/s [2024-11-27T21:51:20.564Z] 1576.33 IOPS, 104.68 MiB/s [2024-11-27T21:51:20.564Z] 1730.25 IOPS, 114.90 MiB/s 00:18:57.443 Latency(us) 00:18:57.443 [2024-11-27T21:51:20.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.443 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:18:57.443 ftl0 : 4.00 1729.52 114.85 0.00 0.00 607.74 150.45 2583.63 00:18:57.443 [2024-11-27T21:51:20.564Z] =================================================================================================================== 00:18:57.443 [2024-11-27T21:51:20.564Z] Total : 1729.52 114.85 0.00 0.00 607.74 150.45 2583.63 00:18:57.443 [2024-11-27 21:51:20.142378] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:57.443 { 00:18:57.443 "results": [ 00:18:57.443 { 00:18:57.443 "job": "ftl0", 00:18:57.443 "core_mask": "0x1", 00:18:57.443 "workload": "randwrite", 00:18:57.443 "status": "finished", 00:18:57.443 "queue_depth": 1, 00:18:57.443 "io_size": 69632, 00:18:57.443 "runtime": 4.002274, 00:18:57.443 "iops": 1729.5167697164163, 00:18:57.443 "mibps": 114.85072298898078, 00:18:57.443 "io_failed": 0, 00:18:57.443 "io_timeout": 0, 00:18:57.443 "avg_latency_us": 607.7353401640255, 00:18:57.443 "min_latency_us": 150.44923076923078, 00:18:57.443 "max_latency_us": 2583.630769230769 00:18:57.443 } 00:18:57.443 ], 00:18:57.443 "core_count": 1 00:18:57.443 } 00:18:57.443 21:51:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:18:57.443 [2024-11-27 21:51:20.261655] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:57.443 Running I/O for 4 seconds... 
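The MiB/s column in the result block above can be reproduced from the JSON fields of the same run (1729.52 IOPS at 69632 bytes per I/O, values copied verbatim from this log):

  # cross-check: iops * io_size / 2^20 should match the "mibps" field
  awk 'BEGIN { printf "%.2f\n", 1729.52 * 69632 / 1048576 }'   # prints 114.85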
00:18:59.325 5821.00 IOPS, 22.74 MiB/s [2024-11-27T21:51:23.386Z] 6290.50 IOPS, 24.57 MiB/s [2024-11-27T21:51:24.331Z] 5907.33 IOPS, 23.08 MiB/s [2024-11-27T21:51:24.331Z] 5713.50 IOPS, 22.32 MiB/s 00:19:01.210 Latency(us) 00:19:01.210 [2024-11-27T21:51:24.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.210 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:01.210 ftl0 : 4.05 5678.49 22.18 0.00 0.00 22424.72 280.42 54041.99 00:19:01.210 [2024-11-27T21:51:24.331Z] =================================================================================================================== 00:19:01.210 [2024-11-27T21:51:24.331Z] Total : 5678.49 22.18 0.00 0.00 22424.72 0.00 54041.99 00:19:01.210 [2024-11-27 21:51:24.318244] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:01.210 { 00:19:01.210 "results": [ 00:19:01.210 { 00:19:01.210 "job": "ftl0", 00:19:01.210 "core_mask": "0x1", 00:19:01.210 "workload": "randwrite", 00:19:01.210 "status": "finished", 00:19:01.210 "queue_depth": 128, 00:19:01.210 "io_size": 4096, 00:19:01.210 "runtime": 4.0472, 00:19:01.210 "iops": 5678.493773473018, 00:19:01.210 "mibps": 22.181616302628978, 00:19:01.210 "io_failed": 0, 00:19:01.210 "io_timeout": 0, 00:19:01.210 "avg_latency_us": 22424.717573485603, 00:19:01.210 "min_latency_us": 280.41846153846154, 00:19:01.210 "max_latency_us": 54041.99384615385 00:19:01.210 } 00:19:01.210 ], 00:19:01.210 "core_count": 1 00:19:01.210 } 00:19:01.470 21:51:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:01.470 [2024-11-27 21:51:24.426949] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:01.470 Running I/O for 4 seconds... 
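The per-run JSON printed after each test is convenient to post-process. As a hypothetical sketch (results.json is an assumed file name, not something this job writes), the headline numbers could be extracted with jq using the field names visible above:

  # pull job name, IOPS, throughput and mean latency from a saved result block
  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json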
00:19:03.355 4809.00 IOPS, 18.79 MiB/s [2024-11-27T21:51:27.868Z] 4816.50 IOPS, 18.81 MiB/s [2024-11-27T21:51:28.440Z] 4779.33 IOPS, 18.67 MiB/s [2024-11-27T21:51:28.701Z] 4769.00 IOPS, 18.63 MiB/s 00:19:05.580 Latency(us) 00:19:05.580 [2024-11-27T21:51:28.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.580 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:05.580 Verification LBA range: start 0x0 length 0x1400000 00:19:05.580 ftl0 : 4.02 4778.85 18.67 0.00 0.00 26696.45 278.84 118569.75 00:19:05.580 [2024-11-27T21:51:28.701Z] =================================================================================================================== 00:19:05.580 [2024-11-27T21:51:28.701Z] Total : 4778.85 18.67 0.00 0.00 26696.45 0.00 118569.75 00:19:05.580 [2024-11-27 21:51:28.454114] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:05.580 { 00:19:05.580 "results": [ 00:19:05.580 { 00:19:05.580 "job": "ftl0", 00:19:05.580 "core_mask": "0x1", 00:19:05.580 "workload": "verify", 00:19:05.580 "status": "finished", 00:19:05.580 "verify_range": { 00:19:05.580 "start": 0, 00:19:05.580 "length": 20971520 00:19:05.580 }, 00:19:05.580 "queue_depth": 128, 00:19:05.580 "io_size": 4096, 00:19:05.580 "runtime": 4.017703, 00:19:05.580 "iops": 4778.850004592176, 00:19:05.580 "mibps": 18.66738283043819, 00:19:05.580 "io_failed": 0, 00:19:05.580 "io_timeout": 0, 00:19:05.580 "avg_latency_us": 26696.44996923077, 00:19:05.580 "min_latency_us": 278.8430769230769, 00:19:05.580 "max_latency_us": 118569.74769230769 00:19:05.580 } 00:19:05.580 ], 00:19:05.580 "core_count": 1 00:19:05.580 } 00:19:05.580 21:51:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:05.580 [2024-11-27 21:51:28.666470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.580 [2024-11-27 21:51:28.666531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:05.580 [2024-11-27 21:51:28.666554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:05.580 [2024-11-27 21:51:28.666566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.580 [2024-11-27 21:51:28.666591] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:05.580 [2024-11-27 21:51:28.667332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.580 [2024-11-27 21:51:28.667397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:05.580 [2024-11-27 21:51:28.667408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.725 ms 00:19:05.580 [2024-11-27 21:51:28.667420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.580 [2024-11-27 21:51:28.670602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.580 [2024-11-27 21:51:28.670655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:05.580 [2024-11-27 21:51:28.670666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.154 ms 00:19:05.580 [2024-11-27 21:51:28.670681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.842 [2024-11-27 21:51:28.886938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.842 [2024-11-27 21:51:28.887021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:19:05.842 [2024-11-27 21:51:28.887038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 216.234 ms 00:19:05.842 [2024-11-27 21:51:28.887050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.842 [2024-11-27 21:51:28.893278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.842 [2024-11-27 21:51:28.893328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:05.842 [2024-11-27 21:51:28.893357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.183 ms 00:19:05.842 [2024-11-27 21:51:28.893377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.842 [2024-11-27 21:51:28.896315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.842 [2024-11-27 21:51:28.896383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:05.842 [2024-11-27 21:51:28.896395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.855 ms 00:19:05.842 [2024-11-27 21:51:28.896404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.842 [2024-11-27 21:51:28.903023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.842 [2024-11-27 21:51:28.903089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:05.842 [2024-11-27 21:51:28.903101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.575 ms 00:19:05.843 [2024-11-27 21:51:28.903122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.843 [2024-11-27 21:51:28.903252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.843 [2024-11-27 21:51:28.903265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:05.843 [2024-11-27 21:51:28.903275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:19:05.843 [2024-11-27 21:51:28.903285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.843 [2024-11-27 21:51:28.906299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.843 [2024-11-27 21:51:28.906371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:05.843 [2024-11-27 21:51:28.906382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.994 ms 00:19:05.843 [2024-11-27 21:51:28.906392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.843 [2024-11-27 21:51:28.908942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.843 [2024-11-27 21:51:28.908997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:05.843 [2024-11-27 21:51:28.909007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.506 ms 00:19:05.843 [2024-11-27 21:51:28.909016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.843 [2024-11-27 21:51:28.911267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.843 [2024-11-27 21:51:28.911484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:05.843 [2024-11-27 21:51:28.911504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.210 ms 00:19:05.843 [2024-11-27 21:51:28.911518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.843 [2024-11-27 21:51:28.913572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.843 [2024-11-27 21:51:28.913626] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:05.843 [2024-11-27 21:51:28.913637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.989 ms 00:19:05.843 [2024-11-27 21:51:28.913647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.843 [2024-11-27 21:51:28.913687] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:05.843 [2024-11-27 21:51:28.913730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.913742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.913753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.913762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.913773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.913781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.913793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.913800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.913811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.913818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.913831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.913839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.913848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.913855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.913865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.913873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.913882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.913890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.913900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.913908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.913918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.913925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:19:05.843 [2024-11-27 21:51:28.913937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.913944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.913972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.913980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.913993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:05.843 [2024-11-27 21:51:28.914403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:05.844 [2024-11-27 21:51:28.914412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:05.844 [2024-11-27 21:51:28.914420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:05.844 [2024-11-27 21:51:28.914431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:05.844 [2024-11-27 21:51:28.914439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:05.844 [2024-11-27 21:51:28.914452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:05.844 [2024-11-27 21:51:28.914460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:05.844 [2024-11-27 21:51:28.914470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:05.844 [2024-11-27 21:51:28.914478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:05.844 [2024-11-27 21:51:28.914488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:05.844 [2024-11-27 21:51:28.914495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:05.844 [2024-11-27 21:51:28.914505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:05.844 [2024-11-27 21:51:28.914514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:05.844 [2024-11-27 21:51:28.914524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:05.844 [2024-11-27 21:51:28.914532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:05.844 [2024-11-27 21:51:28.914553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:05.844 [2024-11-27 21:51:28.914561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:05.844 [2024-11-27 21:51:28.914571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:05.844 [2024-11-27 21:51:28.914578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:05.844 [2024-11-27 21:51:28.914587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:05.844 [2024-11-27 21:51:28.914594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:05.844 [2024-11-27 21:51:28.914606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:05.844 [2024-11-27 21:51:28.914613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:05.844 [2024-11-27 21:51:28.914623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:05.844 [2024-11-27 21:51:28.914630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:05.844 [2024-11-27 21:51:28.914640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:05.844 [2024-11-27 21:51:28.914647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:05.844 [2024-11-27 21:51:28.914658] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:05.844 [2024-11-27 21:51:28.914666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:05.844 [2024-11-27 21:51:28.914676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:05.844 [2024-11-27 21:51:28.914684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:05.844 [2024-11-27 21:51:28.914703] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:05.844 [2024-11-27 21:51:28.914711] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1a0f5ff4-c9f5-4de8-8448-bde593e1bcb8 00:19:05.844 [2024-11-27 21:51:28.914731] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:05.844 [2024-11-27 21:51:28.914744] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:05.844 [2024-11-27 21:51:28.914754] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:05.844 [2024-11-27 21:51:28.914762] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:05.844 [2024-11-27 21:51:28.914773] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:05.844 [2024-11-27 21:51:28.914783] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:05.844 [2024-11-27 21:51:28.914797] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:05.844 [2024-11-27 21:51:28.914804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:05.844 [2024-11-27 21:51:28.914812] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:05.844 [2024-11-27 21:51:28.914820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.844 [2024-11-27 21:51:28.914830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:05.844 [2024-11-27 21:51:28.914841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.134 ms 00:19:05.844 [2024-11-27 21:51:28.914850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.844 [2024-11-27 21:51:28.917204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.844 [2024-11-27 21:51:28.917242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:05.844 [2024-11-27 21:51:28.917253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.302 ms 00:19:05.844 [2024-11-27 21:51:28.917265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.844 [2024-11-27 21:51:28.917423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.844 [2024-11-27 21:51:28.917435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:05.844 [2024-11-27 21:51:28.917444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:19:05.844 [2024-11-27 21:51:28.917456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.844 [2024-11-27 21:51:28.925185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.844 [2024-11-27 21:51:28.925242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:05.844 [2024-11-27 21:51:28.925253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.844 [2024-11-27 21:51:28.925263] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:05.844 [2024-11-27 21:51:28.925331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.844 [2024-11-27 21:51:28.925414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:05.844 [2024-11-27 21:51:28.925422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.844 [2024-11-27 21:51:28.925431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.844 [2024-11-27 21:51:28.925553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.844 [2024-11-27 21:51:28.925567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:05.844 [2024-11-27 21:51:28.925576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.844 [2024-11-27 21:51:28.925586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.844 [2024-11-27 21:51:28.925607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.844 [2024-11-27 21:51:28.925621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:05.844 [2024-11-27 21:51:28.925629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.844 [2024-11-27 21:51:28.925645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.844 [2024-11-27 21:51:28.939868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.844 [2024-11-27 21:51:28.939935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:05.844 [2024-11-27 21:51:28.939952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.844 [2024-11-27 21:51:28.939962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.844 [2024-11-27 21:51:28.952202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.844 [2024-11-27 21:51:28.952269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:05.844 [2024-11-27 21:51:28.952286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.844 [2024-11-27 21:51:28.952298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.844 [2024-11-27 21:51:28.952399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.844 [2024-11-27 21:51:28.952415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:05.844 [2024-11-27 21:51:28.952424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.844 [2024-11-27 21:51:28.952435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.844 [2024-11-27 21:51:28.952497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.844 [2024-11-27 21:51:28.952510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:05.844 [2024-11-27 21:51:28.952521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.844 [2024-11-27 21:51:28.952535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.844 [2024-11-27 21:51:28.952614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.844 [2024-11-27 21:51:28.952627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:05.844 [2024-11-27 21:51:28.952636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:05.844 [2024-11-27 21:51:28.952648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.844 [2024-11-27 21:51:28.952677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.844 [2024-11-27 21:51:28.952690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:05.844 [2024-11-27 21:51:28.952698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.844 [2024-11-27 21:51:28.952710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.844 [2024-11-27 21:51:28.952755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.844 [2024-11-27 21:51:28.952767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:05.844 [2024-11-27 21:51:28.952775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.844 [2024-11-27 21:51:28.952790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.844 [2024-11-27 21:51:28.952836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:05.844 [2024-11-27 21:51:28.952850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:05.844 [2024-11-27 21:51:28.952858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:05.844 [2024-11-27 21:51:28.952874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.844 [2024-11-27 21:51:28.953014] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 286.509 ms, result 0 00:19:05.844 true 00:19:06.105 21:51:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 86800 00:19:06.105 21:51:28 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 86800 ']' 00:19:06.105 21:51:28 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 86800 00:19:06.105 21:51:28 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:19:06.105 21:51:28 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:06.105 21:51:28 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86800 00:19:06.105 21:51:29 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:06.105 21:51:29 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:06.105 21:51:29 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86800' 00:19:06.105 killing process with pid 86800 00:19:06.105 Received shutdown signal, test time was about 4.000000 seconds 00:19:06.105 00:19:06.105 Latency(us) 00:19:06.105 [2024-11-27T21:51:29.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.105 [2024-11-27T21:51:29.226Z] =================================================================================================================== 00:19:06.105 [2024-11-27T21:51:29.226Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:06.105 21:51:29 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 86800 00:19:06.105 21:51:29 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 86800 00:19:06.105 21:51:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:06.105 21:51:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:06.105 21:51:29 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:06.105 Remove shared memory files 00:19:06.105 21:51:29 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:06.365 21:51:29 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:06.365 21:51:29 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:06.365 21:51:29 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:06.365 21:51:29 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:06.365 ************************************ 00:19:06.365 END TEST ftl_bdevperf 00:19:06.365 ************************************ 00:19:06.365 00:19:06.365 real 0m21.304s 00:19:06.365 user 0m23.996s 00:19:06.365 sys 0m0.954s 00:19:06.365 21:51:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:06.365 21:51:29 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:06.365 21:51:29 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:06.365 21:51:29 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:06.365 21:51:29 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:06.365 21:51:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:06.365 ************************************ 00:19:06.365 START TEST ftl_trim 00:19:06.365 ************************************ 00:19:06.365 21:51:29 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:06.365 * Looking for test storage... 00:19:06.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:06.365 21:51:29 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:06.365 21:51:29 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:19:06.365 21:51:29 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:06.365 21:51:29 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:06.365 21:51:29 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:06.365 21:51:29 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:06.365 21:51:29 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:06.365 21:51:29 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:06.365 21:51:29 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:06.365 21:51:29 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:06.365 21:51:29 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:06.365 21:51:29 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:06.365 21:51:29 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:06.365 21:51:29 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:06.365 21:51:29 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:06.365 21:51:29 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:06.365 21:51:29 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:06.365 21:51:29 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:06.365 21:51:29 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:06.365 21:51:29 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:06.365 21:51:29 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:06.365 21:51:29 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:06.365 21:51:29 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:06.365 21:51:29 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:06.365 21:51:29 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:06.365 21:51:29 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:06.365 21:51:29 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:06.365 21:51:29 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:06.365 21:51:29 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:06.365 21:51:29 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:06.365 21:51:29 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:06.365 21:51:29 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:06.365 21:51:29 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:06.365 21:51:29 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:06.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.365 --rc genhtml_branch_coverage=1 00:19:06.365 --rc genhtml_function_coverage=1 00:19:06.365 --rc genhtml_legend=1 00:19:06.365 --rc geninfo_all_blocks=1 00:19:06.365 --rc geninfo_unexecuted_blocks=1 00:19:06.365 00:19:06.365 ' 00:19:06.365 21:51:29 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:06.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.365 --rc genhtml_branch_coverage=1 00:19:06.365 --rc genhtml_function_coverage=1 00:19:06.365 --rc genhtml_legend=1 00:19:06.365 --rc geninfo_all_blocks=1 00:19:06.365 --rc geninfo_unexecuted_blocks=1 00:19:06.365 00:19:06.365 ' 00:19:06.365 21:51:29 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:06.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.365 --rc genhtml_branch_coverage=1 00:19:06.365 --rc genhtml_function_coverage=1 00:19:06.365 --rc genhtml_legend=1 00:19:06.365 --rc geninfo_all_blocks=1 00:19:06.365 --rc geninfo_unexecuted_blocks=1 00:19:06.365 00:19:06.365 ' 00:19:06.365 21:51:29 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:06.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.365 --rc genhtml_branch_coverage=1 00:19:06.365 --rc genhtml_function_coverage=1 00:19:06.365 --rc genhtml_legend=1 00:19:06.365 --rc geninfo_all_blocks=1 00:19:06.365 --rc geninfo_unexecuted_blocks=1 00:19:06.365 00:19:06.365 ' 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:06.365 21:51:29 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=87141 00:19:06.365 21:51:29 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 87141 00:19:06.365 21:51:29 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 87141 ']' 00:19:06.365 21:51:29 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.365 21:51:29 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.625 21:51:29 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.625 21:51:29 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.625 21:51:29 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:06.625 21:51:29 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:06.625 [2024-11-27 21:51:29.568533] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:19:06.625 [2024-11-27 21:51:29.568693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87141 ] 00:19:06.625 [2024-11-27 21:51:29.717797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:06.887 [2024-11-27 21:51:29.750622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.887 [2024-11-27 21:51:29.750973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:06.887 [2024-11-27 21:51:29.751021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.459 21:51:30 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.459 21:51:30 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:07.459 21:51:30 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:07.459 21:51:30 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:07.459 21:51:30 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:07.459 21:51:30 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:07.459 21:51:30 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:07.459 21:51:30 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:07.720 21:51:30 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:07.720 21:51:30 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:07.720 21:51:30 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:07.720 21:51:30 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:07.720 21:51:30 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:07.720 21:51:30 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:07.720 21:51:30 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:07.720 21:51:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:07.981 21:51:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:07.981 { 00:19:07.981 "name": "nvme0n1", 00:19:07.981 "aliases": [ 
00:19:07.981 "26143942-5439-40bf-9a74-e387e9c87337" 00:19:07.981 ], 00:19:07.981 "product_name": "NVMe disk", 00:19:07.981 "block_size": 4096, 00:19:07.981 "num_blocks": 1310720, 00:19:07.981 "uuid": "26143942-5439-40bf-9a74-e387e9c87337", 00:19:07.981 "numa_id": -1, 00:19:07.981 "assigned_rate_limits": { 00:19:07.981 "rw_ios_per_sec": 0, 00:19:07.981 "rw_mbytes_per_sec": 0, 00:19:07.981 "r_mbytes_per_sec": 0, 00:19:07.981 "w_mbytes_per_sec": 0 00:19:07.981 }, 00:19:07.981 "claimed": true, 00:19:07.981 "claim_type": "read_many_write_one", 00:19:07.981 "zoned": false, 00:19:07.981 "supported_io_types": { 00:19:07.981 "read": true, 00:19:07.981 "write": true, 00:19:07.981 "unmap": true, 00:19:07.981 "flush": true, 00:19:07.981 "reset": true, 00:19:07.981 "nvme_admin": true, 00:19:07.981 "nvme_io": true, 00:19:07.981 "nvme_io_md": false, 00:19:07.981 "write_zeroes": true, 00:19:07.981 "zcopy": false, 00:19:07.981 "get_zone_info": false, 00:19:07.981 "zone_management": false, 00:19:07.981 "zone_append": false, 00:19:07.981 "compare": true, 00:19:07.981 "compare_and_write": false, 00:19:07.981 "abort": true, 00:19:07.981 "seek_hole": false, 00:19:07.981 "seek_data": false, 00:19:07.981 "copy": true, 00:19:07.981 "nvme_iov_md": false 00:19:07.981 }, 00:19:07.981 "driver_specific": { 00:19:07.981 "nvme": [ 00:19:07.981 { 00:19:07.981 "pci_address": "0000:00:11.0", 00:19:07.981 "trid": { 00:19:07.981 "trtype": "PCIe", 00:19:07.981 "traddr": "0000:00:11.0" 00:19:07.981 }, 00:19:07.981 "ctrlr_data": { 00:19:07.981 "cntlid": 0, 00:19:07.981 "vendor_id": "0x1b36", 00:19:07.981 "model_number": "QEMU NVMe Ctrl", 00:19:07.981 "serial_number": "12341", 00:19:07.981 "firmware_revision": "8.0.0", 00:19:07.981 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:07.981 "oacs": { 00:19:07.981 "security": 0, 00:19:07.981 "format": 1, 00:19:07.981 "firmware": 0, 00:19:07.981 "ns_manage": 1 00:19:07.981 }, 00:19:07.981 "multi_ctrlr": false, 00:19:07.981 "ana_reporting": false 00:19:07.981 }, 00:19:07.981 "vs": { 00:19:07.981 "nvme_version": "1.4" 00:19:07.981 }, 00:19:07.981 "ns_data": { 00:19:07.981 "id": 1, 00:19:07.981 "can_share": false 00:19:07.981 } 00:19:07.981 } 00:19:07.981 ], 00:19:07.981 "mp_policy": "active_passive" 00:19:07.981 } 00:19:07.981 } 00:19:07.981 ]' 00:19:07.981 21:51:30 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:07.981 21:51:30 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:07.981 21:51:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:07.981 21:51:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:07.981 21:51:30 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:07.981 21:51:30 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:07.981 21:51:30 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:07.981 21:51:31 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:07.981 21:51:31 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:07.981 21:51:31 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:07.981 21:51:31 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:08.241 21:51:31 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=483523ae-5747-4114-b1ea-f0528d3378ed 00:19:08.241 21:51:31 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:08.242 21:51:31 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 483523ae-5747-4114-b1ea-f0528d3378ed 00:19:08.501 21:51:31 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:08.762 21:51:31 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=3fca38ee-72cf-45ff-9ee7-8ee3534a33e6 00:19:08.762 21:51:31 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3fca38ee-72cf-45ff-9ee7-8ee3534a33e6 00:19:09.024 21:51:31 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=27c188c3-226b-49b0-8eb6-4a001a14847f 00:19:09.024 21:51:31 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 27c188c3-226b-49b0-8eb6-4a001a14847f 00:19:09.024 21:51:31 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:09.024 21:51:31 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:09.024 21:51:31 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=27c188c3-226b-49b0-8eb6-4a001a14847f 00:19:09.024 21:51:31 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:09.024 21:51:31 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 27c188c3-226b-49b0-8eb6-4a001a14847f 00:19:09.024 21:51:31 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=27c188c3-226b-49b0-8eb6-4a001a14847f 00:19:09.024 21:51:31 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:09.024 21:51:31 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:09.024 21:51:31 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:09.024 21:51:31 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 27c188c3-226b-49b0-8eb6-4a001a14847f 00:19:09.024 21:51:32 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:09.024 { 00:19:09.024 "name": "27c188c3-226b-49b0-8eb6-4a001a14847f", 00:19:09.024 "aliases": [ 00:19:09.024 "lvs/nvme0n1p0" 00:19:09.024 ], 00:19:09.024 "product_name": "Logical Volume", 00:19:09.024 "block_size": 4096, 00:19:09.024 "num_blocks": 26476544, 00:19:09.024 "uuid": "27c188c3-226b-49b0-8eb6-4a001a14847f", 00:19:09.024 "assigned_rate_limits": { 00:19:09.024 "rw_ios_per_sec": 0, 00:19:09.024 "rw_mbytes_per_sec": 0, 00:19:09.024 "r_mbytes_per_sec": 0, 00:19:09.024 "w_mbytes_per_sec": 0 00:19:09.024 }, 00:19:09.024 "claimed": false, 00:19:09.024 "zoned": false, 00:19:09.024 "supported_io_types": { 00:19:09.024 "read": true, 00:19:09.024 "write": true, 00:19:09.024 "unmap": true, 00:19:09.024 "flush": false, 00:19:09.024 "reset": true, 00:19:09.024 "nvme_admin": false, 00:19:09.024 "nvme_io": false, 00:19:09.024 "nvme_io_md": false, 00:19:09.024 "write_zeroes": true, 00:19:09.024 "zcopy": false, 00:19:09.024 "get_zone_info": false, 00:19:09.024 "zone_management": false, 00:19:09.024 "zone_append": false, 00:19:09.024 "compare": false, 00:19:09.024 "compare_and_write": false, 00:19:09.024 "abort": false, 00:19:09.024 "seek_hole": true, 00:19:09.024 "seek_data": true, 00:19:09.024 "copy": false, 00:19:09.024 "nvme_iov_md": false 00:19:09.024 }, 00:19:09.024 "driver_specific": { 00:19:09.024 "lvol": { 00:19:09.024 "lvol_store_uuid": "3fca38ee-72cf-45ff-9ee7-8ee3534a33e6", 00:19:09.024 "base_bdev": "nvme0n1", 00:19:09.024 "thin_provision": true, 00:19:09.024 "num_allocated_clusters": 0, 00:19:09.024 "snapshot": false, 00:19:09.024 "clone": false, 00:19:09.024 "esnap_clone": false 00:19:09.024 } 00:19:09.024 } 00:19:09.024 } 00:19:09.024 ]' 00:19:09.024 21:51:32 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:09.284 21:51:32 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:09.284 21:51:32 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:09.284 21:51:32 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:09.284 21:51:32 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:09.284 21:51:32 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:09.284 21:51:32 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:09.284 21:51:32 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:09.284 21:51:32 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:09.543 21:51:32 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:09.543 21:51:32 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:09.543 21:51:32 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 27c188c3-226b-49b0-8eb6-4a001a14847f 00:19:09.543 21:51:32 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=27c188c3-226b-49b0-8eb6-4a001a14847f 00:19:09.543 21:51:32 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:09.543 21:51:32 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:09.543 21:51:32 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:09.543 21:51:32 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 27c188c3-226b-49b0-8eb6-4a001a14847f 00:19:09.543 21:51:32 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:09.543 { 00:19:09.543 "name": "27c188c3-226b-49b0-8eb6-4a001a14847f", 00:19:09.543 "aliases": [ 00:19:09.543 "lvs/nvme0n1p0" 00:19:09.543 ], 00:19:09.543 "product_name": "Logical Volume", 00:19:09.543 "block_size": 4096, 00:19:09.543 "num_blocks": 26476544, 00:19:09.543 "uuid": "27c188c3-226b-49b0-8eb6-4a001a14847f", 00:19:09.543 "assigned_rate_limits": { 00:19:09.543 "rw_ios_per_sec": 0, 00:19:09.543 "rw_mbytes_per_sec": 0, 00:19:09.543 "r_mbytes_per_sec": 0, 00:19:09.543 "w_mbytes_per_sec": 0 00:19:09.543 }, 00:19:09.543 "claimed": false, 00:19:09.543 "zoned": false, 00:19:09.543 "supported_io_types": { 00:19:09.543 "read": true, 00:19:09.543 "write": true, 00:19:09.543 "unmap": true, 00:19:09.543 "flush": false, 00:19:09.543 "reset": true, 00:19:09.543 "nvme_admin": false, 00:19:09.543 "nvme_io": false, 00:19:09.543 "nvme_io_md": false, 00:19:09.543 "write_zeroes": true, 00:19:09.543 "zcopy": false, 00:19:09.543 "get_zone_info": false, 00:19:09.543 "zone_management": false, 00:19:09.543 "zone_append": false, 00:19:09.543 "compare": false, 00:19:09.543 "compare_and_write": false, 00:19:09.543 "abort": false, 00:19:09.543 "seek_hole": true, 00:19:09.543 "seek_data": true, 00:19:09.543 "copy": false, 00:19:09.543 "nvme_iov_md": false 00:19:09.543 }, 00:19:09.543 "driver_specific": { 00:19:09.543 "lvol": { 00:19:09.543 "lvol_store_uuid": "3fca38ee-72cf-45ff-9ee7-8ee3534a33e6", 00:19:09.543 "base_bdev": "nvme0n1", 00:19:09.543 "thin_provision": true, 00:19:09.543 "num_allocated_clusters": 0, 00:19:09.543 "snapshot": false, 00:19:09.543 "clone": false, 00:19:09.543 "esnap_clone": false 00:19:09.543 } 00:19:09.543 } 00:19:09.543 } 00:19:09.543 ]' 00:19:09.543 21:51:32 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:09.803 21:51:32 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:09.803 21:51:32 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:09.803 21:51:32 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:09.803 21:51:32 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:09.803 21:51:32 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:09.803 21:51:32 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:09.803 21:51:32 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:09.803 21:51:32 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:09.803 21:51:32 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:09.803 21:51:32 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 27c188c3-226b-49b0-8eb6-4a001a14847f 00:19:09.803 21:51:32 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=27c188c3-226b-49b0-8eb6-4a001a14847f 00:19:09.803 21:51:32 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:09.803 21:51:32 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:09.803 21:51:32 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:09.803 21:51:32 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 27c188c3-226b-49b0-8eb6-4a001a14847f 00:19:10.064 21:51:33 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:10.064 { 00:19:10.064 "name": "27c188c3-226b-49b0-8eb6-4a001a14847f", 00:19:10.064 "aliases": [ 00:19:10.064 "lvs/nvme0n1p0" 00:19:10.064 ], 00:19:10.064 "product_name": "Logical Volume", 00:19:10.064 "block_size": 4096, 00:19:10.064 "num_blocks": 26476544, 00:19:10.064 "uuid": "27c188c3-226b-49b0-8eb6-4a001a14847f", 00:19:10.064 "assigned_rate_limits": { 00:19:10.064 "rw_ios_per_sec": 0, 00:19:10.064 "rw_mbytes_per_sec": 0, 00:19:10.064 "r_mbytes_per_sec": 0, 00:19:10.064 "w_mbytes_per_sec": 0 00:19:10.064 }, 00:19:10.064 "claimed": false, 00:19:10.064 "zoned": false, 00:19:10.064 "supported_io_types": { 00:19:10.064 "read": true, 00:19:10.064 "write": true, 00:19:10.064 "unmap": true, 00:19:10.064 "flush": false, 00:19:10.064 "reset": true, 00:19:10.064 "nvme_admin": false, 00:19:10.064 "nvme_io": false, 00:19:10.064 "nvme_io_md": false, 00:19:10.064 "write_zeroes": true, 00:19:10.064 "zcopy": false, 00:19:10.064 "get_zone_info": false, 00:19:10.064 "zone_management": false, 00:19:10.064 "zone_append": false, 00:19:10.064 "compare": false, 00:19:10.064 "compare_and_write": false, 00:19:10.064 "abort": false, 00:19:10.064 "seek_hole": true, 00:19:10.064 "seek_data": true, 00:19:10.064 "copy": false, 00:19:10.064 "nvme_iov_md": false 00:19:10.064 }, 00:19:10.064 "driver_specific": { 00:19:10.064 "lvol": { 00:19:10.064 "lvol_store_uuid": "3fca38ee-72cf-45ff-9ee7-8ee3534a33e6", 00:19:10.064 "base_bdev": "nvme0n1", 00:19:10.064 "thin_provision": true, 00:19:10.064 "num_allocated_clusters": 0, 00:19:10.064 "snapshot": false, 00:19:10.064 "clone": false, 00:19:10.064 "esnap_clone": false 00:19:10.064 } 00:19:10.064 } 00:19:10.064 } 00:19:10.064 ]' 00:19:10.064 21:51:33 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:10.064 21:51:33 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:10.064 21:51:33 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:10.326 21:51:33 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:19:10.326 21:51:33 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:10.326 21:51:33 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:10.326 21:51:33 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:10.326 21:51:33 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 27c188c3-226b-49b0-8eb6-4a001a14847f -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:10.326 [2024-11-27 21:51:33.370265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.326 [2024-11-27 21:51:33.370311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:10.326 [2024-11-27 21:51:33.370323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:10.326 [2024-11-27 21:51:33.370347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.326 [2024-11-27 21:51:33.372764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.326 [2024-11-27 21:51:33.372798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:10.326 [2024-11-27 21:51:33.372808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.382 ms 00:19:10.326 [2024-11-27 21:51:33.372819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.326 [2024-11-27 21:51:33.372910] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:10.326 [2024-11-27 21:51:33.373209] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:10.326 [2024-11-27 21:51:33.373247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.326 [2024-11-27 21:51:33.373258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:10.326 [2024-11-27 21:51:33.373268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:19:10.326 [2024-11-27 21:51:33.373286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.326 [2024-11-27 21:51:33.373418] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 78ee2df8-80e5-4b2c-bba3-34ec78c869a1 00:19:10.326 [2024-11-27 21:51:33.374555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.326 [2024-11-27 21:51:33.374584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:10.326 [2024-11-27 21:51:33.374595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:10.326 [2024-11-27 21:51:33.374602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.326 [2024-11-27 21:51:33.380151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.326 [2024-11-27 21:51:33.380179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:10.326 [2024-11-27 21:51:33.380190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.454 ms 00:19:10.326 [2024-11-27 21:51:33.380197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.326 [2024-11-27 21:51:33.380317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.326 [2024-11-27 21:51:33.380327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:10.326 [2024-11-27 21:51:33.380352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.056 ms 00:19:10.326 [2024-11-27 21:51:33.380359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.326 [2024-11-27 21:51:33.380398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.326 [2024-11-27 21:51:33.380406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:10.326 [2024-11-27 21:51:33.380416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:10.326 [2024-11-27 21:51:33.380423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.326 [2024-11-27 21:51:33.380456] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:10.326 [2024-11-27 21:51:33.381910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.326 [2024-11-27 21:51:33.381939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:10.326 [2024-11-27 21:51:33.381950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.460 ms 00:19:10.326 [2024-11-27 21:51:33.381958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.326 [2024-11-27 21:51:33.382003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.326 [2024-11-27 21:51:33.382012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:10.326 [2024-11-27 21:51:33.382020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:10.326 [2024-11-27 21:51:33.382031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.326 [2024-11-27 21:51:33.382059] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:10.326 [2024-11-27 21:51:33.382218] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:10.326 [2024-11-27 21:51:33.382230] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:10.326 [2024-11-27 21:51:33.382242] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:10.326 [2024-11-27 21:51:33.382252] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:10.326 [2024-11-27 21:51:33.382262] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:10.326 [2024-11-27 21:51:33.382270] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:10.326 [2024-11-27 21:51:33.382280] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:10.326 [2024-11-27 21:51:33.382288] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:10.326 [2024-11-27 21:51:33.382300] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:10.326 [2024-11-27 21:51:33.382307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.326 [2024-11-27 21:51:33.382316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:10.326 [2024-11-27 21:51:33.382324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:19:10.326 [2024-11-27 21:51:33.382333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.326 [2024-11-27 21:51:33.382439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.326 
[2024-11-27 21:51:33.382451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:10.326 [2024-11-27 21:51:33.382458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:10.326 [2024-11-27 21:51:33.382466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.326 [2024-11-27 21:51:33.382589] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:10.326 [2024-11-27 21:51:33.382606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:10.326 [2024-11-27 21:51:33.382616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:10.326 [2024-11-27 21:51:33.382626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.326 [2024-11-27 21:51:33.382634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:10.326 [2024-11-27 21:51:33.382643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:10.327 [2024-11-27 21:51:33.382651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:10.327 [2024-11-27 21:51:33.382661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:10.327 [2024-11-27 21:51:33.382669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:10.327 [2024-11-27 21:51:33.382678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:10.327 [2024-11-27 21:51:33.382685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:10.327 [2024-11-27 21:51:33.382695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:10.327 [2024-11-27 21:51:33.382702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:10.327 [2024-11-27 21:51:33.382715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:10.327 [2024-11-27 21:51:33.382723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:10.327 [2024-11-27 21:51:33.382732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.327 [2024-11-27 21:51:33.382739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:10.327 [2024-11-27 21:51:33.382749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:10.327 [2024-11-27 21:51:33.382757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.327 [2024-11-27 21:51:33.382766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:10.327 [2024-11-27 21:51:33.382774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:10.327 [2024-11-27 21:51:33.382794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:10.327 [2024-11-27 21:51:33.382802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:10.327 [2024-11-27 21:51:33.382811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:10.327 [2024-11-27 21:51:33.382819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:10.327 [2024-11-27 21:51:33.382828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:10.327 [2024-11-27 21:51:33.382835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:10.327 [2024-11-27 21:51:33.382844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:10.327 [2024-11-27 21:51:33.382852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:19:10.327 [2024-11-27 21:51:33.382862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:10.327 [2024-11-27 21:51:33.382870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:10.327 [2024-11-27 21:51:33.382879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:10.327 [2024-11-27 21:51:33.382886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:10.327 [2024-11-27 21:51:33.382895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:10.327 [2024-11-27 21:51:33.382902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:10.327 [2024-11-27 21:51:33.382911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:10.327 [2024-11-27 21:51:33.382918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:10.327 [2024-11-27 21:51:33.382929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:10.327 [2024-11-27 21:51:33.382936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:10.327 [2024-11-27 21:51:33.382945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.327 [2024-11-27 21:51:33.382952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:10.327 [2024-11-27 21:51:33.382961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:10.327 [2024-11-27 21:51:33.382968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.327 [2024-11-27 21:51:33.382977] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:10.327 [2024-11-27 21:51:33.382985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:10.327 [2024-11-27 21:51:33.382996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:10.327 [2024-11-27 21:51:33.383004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.327 [2024-11-27 21:51:33.383014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:10.327 [2024-11-27 21:51:33.383022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:10.327 [2024-11-27 21:51:33.383032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:10.327 [2024-11-27 21:51:33.383040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:10.327 [2024-11-27 21:51:33.383049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:10.327 [2024-11-27 21:51:33.383056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:10.327 [2024-11-27 21:51:33.383068] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:10.327 [2024-11-27 21:51:33.383078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:10.327 [2024-11-27 21:51:33.383098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:10.327 [2024-11-27 21:51:33.383107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:10.327 [2024-11-27 21:51:33.383116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:19:10.327 [2024-11-27 21:51:33.383124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:10.327 [2024-11-27 21:51:33.383134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:10.327 [2024-11-27 21:51:33.383141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:10.327 [2024-11-27 21:51:33.383154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:10.327 [2024-11-27 21:51:33.383163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:10.327 [2024-11-27 21:51:33.383171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:10.327 [2024-11-27 21:51:33.383178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:10.327 [2024-11-27 21:51:33.383186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:10.327 [2024-11-27 21:51:33.383194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:10.327 [2024-11-27 21:51:33.383202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:10.327 [2024-11-27 21:51:33.383210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:10.327 [2024-11-27 21:51:33.383218] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:10.327 [2024-11-27 21:51:33.383228] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:10.327 [2024-11-27 21:51:33.383237] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:10.327 [2024-11-27 21:51:33.383245] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:10.327 [2024-11-27 21:51:33.383253] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:10.327 [2024-11-27 21:51:33.383261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:10.327 [2024-11-27 21:51:33.383270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.327 [2024-11-27 21:51:33.383277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:10.327 [2024-11-27 21:51:33.383288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.746 ms 00:19:10.327 [2024-11-27 21:51:33.383295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.327 [2024-11-27 21:51:33.383400] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:19:10.327 [2024-11-27 21:51:33.383410] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:12.857 [2024-11-27 21:51:35.924326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.857 [2024-11-27 21:51:35.924397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:12.857 [2024-11-27 21:51:35.924416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2540.911 ms 00:19:12.857 [2024-11-27 21:51:35.924424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.857 [2024-11-27 21:51:35.933058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.857 [2024-11-27 21:51:35.933099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:12.857 [2024-11-27 21:51:35.933112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.524 ms 00:19:12.857 [2024-11-27 21:51:35.933119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.857 [2024-11-27 21:51:35.933255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.857 [2024-11-27 21:51:35.933265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:12.858 [2024-11-27 21:51:35.933278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:12.858 [2024-11-27 21:51:35.933285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.858 [2024-11-27 21:51:35.957228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.858 [2024-11-27 21:51:35.957320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:12.858 [2024-11-27 21:51:35.957383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.896 ms 00:19:12.858 [2024-11-27 21:51:35.957405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.858 [2024-11-27 21:51:35.957594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.858 [2024-11-27 21:51:35.957645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:12.858 [2024-11-27 21:51:35.957673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:12.858 [2024-11-27 21:51:35.957735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.858 [2024-11-27 21:51:35.958171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.858 [2024-11-27 21:51:35.958199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:12.858 [2024-11-27 21:51:35.958210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:19:12.858 [2024-11-27 21:51:35.958218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.858 [2024-11-27 21:51:35.958363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.858 [2024-11-27 21:51:35.958378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:12.858 [2024-11-27 21:51:35.958391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:19:12.858 [2024-11-27 21:51:35.958398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.858 [2024-11-27 21:51:35.964044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.858 [2024-11-27 21:51:35.964076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:19:12.858 [2024-11-27 21:51:35.964097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.610 ms 00:19:12.858 [2024-11-27 21:51:35.964105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.858 [2024-11-27 21:51:35.972377] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:13.143 [2024-11-27 21:51:35.987057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.143 [2024-11-27 21:51:35.987094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:13.143 [2024-11-27 21:51:35.987105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.857 ms 00:19:13.143 [2024-11-27 21:51:35.987114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.143 [2024-11-27 21:51:36.043736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.143 [2024-11-27 21:51:36.043777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:13.143 [2024-11-27 21:51:36.043788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.535 ms 00:19:13.143 [2024-11-27 21:51:36.043804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.143 [2024-11-27 21:51:36.044000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.143 [2024-11-27 21:51:36.044012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:13.143 [2024-11-27 21:51:36.044021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:19:13.143 [2024-11-27 21:51:36.044030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.143 [2024-11-27 21:51:36.047123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.143 [2024-11-27 21:51:36.047159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:13.144 [2024-11-27 21:51:36.047170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.048 ms 00:19:13.144 [2024-11-27 21:51:36.047179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.144 [2024-11-27 21:51:36.051231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.144 [2024-11-27 21:51:36.051395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:13.144 [2024-11-27 21:51:36.051434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.959 ms 00:19:13.144 [2024-11-27 21:51:36.051462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.144 [2024-11-27 21:51:36.052491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.144 [2024-11-27 21:51:36.052559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:13.144 [2024-11-27 21:51:36.052587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.820 ms 00:19:13.144 [2024-11-27 21:51:36.052619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.144 [2024-11-27 21:51:36.083032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.144 [2024-11-27 21:51:36.083070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:13.144 [2024-11-27 21:51:36.083081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.299 ms 00:19:13.144 [2024-11-27 21:51:36.083093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
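(Aside, for reference: the device stack that the trace above brings up can be re-created by hand with the same rpc.py calls recorded in the xtrace lines. This is a minimal sketch rather than the test script itself; the RPC variable and the <...> placeholders are illustrative, while the PCI addresses, sizes and limits are the values used in this run.)

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# base data device (nvme0 at 0000:00:11.0) and NV cache device (nvc0 at 0000:00:10.0)
$RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
$RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
# thin-provisioned 103424 MiB lvol carved out of the base namespace
$RPC bdev_lvol_create_lvstore nvme0n1 lvs
$RPC bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore UUID returned above>
# 5171 MiB write-buffer partition on the cache device (the cache_size computed above)
$RPC bdev_split_create nvc0n1 -s 5171 1
# assemble ftl0; -t 240 leaves time for the initial NV cache scrub seen in the trace
$RPC -t 240 bdev_ftl_create -b ftl0 -d <lvol bdev UUID> -c nvc0n1p0 \
    --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

The sizes in the trace appear to line up: get_bdev_size multiplies block_size by num_blocks and reports MiB (4096 B x 1310720 blocks = 5120 MiB for nvme0n1, 4096 B x 26476544 blocks = 103424 MiB for the lvol); the 102400 MiB base data region less 10% overprovisioning is consistent with the 23592960 L2P entries in the layout dump (92160 MiB of 4 KiB user blocks); and 4-byte entries for those blocks give the 90.00 MiB l2p region, of which only 59 MiB can stay resident under the 60 MiB --l2p_dram_limit.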
00:19:13.144 [2024-11-27 21:51:36.086843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.144 [2024-11-27 21:51:36.086879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:13.144 [2024-11-27 21:51:36.086888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.684 ms 00:19:13.144 [2024-11-27 21:51:36.086908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.144 [2024-11-27 21:51:36.089777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.144 [2024-11-27 21:51:36.089811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:13.144 [2024-11-27 21:51:36.089820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.837 ms 00:19:13.144 [2024-11-27 21:51:36.089828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.144 [2024-11-27 21:51:36.093052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.144 [2024-11-27 21:51:36.093086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:13.144 [2024-11-27 21:51:36.093095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.191 ms 00:19:13.144 [2024-11-27 21:51:36.093107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.144 [2024-11-27 21:51:36.093142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.144 [2024-11-27 21:51:36.093153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:13.144 [2024-11-27 21:51:36.093163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:13.144 [2024-11-27 21:51:36.093173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.144 [2024-11-27 21:51:36.093247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.144 [2024-11-27 21:51:36.093266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:13.144 [2024-11-27 21:51:36.093275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:13.144 [2024-11-27 21:51:36.093284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.144 [2024-11-27 21:51:36.094168] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:13.144 [2024-11-27 21:51:36.095122] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2723.575 ms, result 0 00:19:13.144 [2024-11-27 21:51:36.095759] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:13.144 { 00:19:13.144 "name": "ftl0", 00:19:13.144 "uuid": "78ee2df8-80e5-4b2c-bba3-34ec78c869a1" 00:19:13.144 } 00:19:13.144 21:51:36 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:13.144 21:51:36 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:13.144 21:51:36 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:13.144 21:51:36 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:19:13.144 21:51:36 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:13.144 21:51:36 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:13.144 21:51:36 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:13.424 21:51:36 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:13.424 [ 00:19:13.424 { 00:19:13.424 "name": "ftl0", 00:19:13.424 "aliases": [ 00:19:13.424 "78ee2df8-80e5-4b2c-bba3-34ec78c869a1" 00:19:13.424 ], 00:19:13.424 "product_name": "FTL disk", 00:19:13.424 "block_size": 4096, 00:19:13.424 "num_blocks": 23592960, 00:19:13.424 "uuid": "78ee2df8-80e5-4b2c-bba3-34ec78c869a1", 00:19:13.424 "assigned_rate_limits": { 00:19:13.424 "rw_ios_per_sec": 0, 00:19:13.424 "rw_mbytes_per_sec": 0, 00:19:13.424 "r_mbytes_per_sec": 0, 00:19:13.424 "w_mbytes_per_sec": 0 00:19:13.424 }, 00:19:13.424 "claimed": false, 00:19:13.424 "zoned": false, 00:19:13.424 "supported_io_types": { 00:19:13.424 "read": true, 00:19:13.424 "write": true, 00:19:13.424 "unmap": true, 00:19:13.424 "flush": true, 00:19:13.424 "reset": false, 00:19:13.424 "nvme_admin": false, 00:19:13.424 "nvme_io": false, 00:19:13.424 "nvme_io_md": false, 00:19:13.424 "write_zeroes": true, 00:19:13.424 "zcopy": false, 00:19:13.424 "get_zone_info": false, 00:19:13.424 "zone_management": false, 00:19:13.424 "zone_append": false, 00:19:13.424 "compare": false, 00:19:13.424 "compare_and_write": false, 00:19:13.424 "abort": false, 00:19:13.424 "seek_hole": false, 00:19:13.424 "seek_data": false, 00:19:13.424 "copy": false, 00:19:13.424 "nvme_iov_md": false 00:19:13.424 }, 00:19:13.424 "driver_specific": { 00:19:13.424 "ftl": { 00:19:13.424 "base_bdev": "27c188c3-226b-49b0-8eb6-4a001a14847f", 00:19:13.424 "cache": "nvc0n1p0" 00:19:13.424 } 00:19:13.424 } 00:19:13.424 } 00:19:13.424 ] 00:19:13.424 21:51:36 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:19:13.424 21:51:36 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:13.425 21:51:36 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:13.682 21:51:36 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:13.682 21:51:36 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:13.940 21:51:36 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:13.940 { 00:19:13.940 "name": "ftl0", 00:19:13.940 "aliases": [ 00:19:13.940 "78ee2df8-80e5-4b2c-bba3-34ec78c869a1" 00:19:13.940 ], 00:19:13.940 "product_name": "FTL disk", 00:19:13.940 "block_size": 4096, 00:19:13.940 "num_blocks": 23592960, 00:19:13.940 "uuid": "78ee2df8-80e5-4b2c-bba3-34ec78c869a1", 00:19:13.940 "assigned_rate_limits": { 00:19:13.940 "rw_ios_per_sec": 0, 00:19:13.940 "rw_mbytes_per_sec": 0, 00:19:13.940 "r_mbytes_per_sec": 0, 00:19:13.940 "w_mbytes_per_sec": 0 00:19:13.940 }, 00:19:13.940 "claimed": false, 00:19:13.940 "zoned": false, 00:19:13.940 "supported_io_types": { 00:19:13.940 "read": true, 00:19:13.940 "write": true, 00:19:13.940 "unmap": true, 00:19:13.940 "flush": true, 00:19:13.940 "reset": false, 00:19:13.940 "nvme_admin": false, 00:19:13.940 "nvme_io": false, 00:19:13.940 "nvme_io_md": false, 00:19:13.940 "write_zeroes": true, 00:19:13.940 "zcopy": false, 00:19:13.940 "get_zone_info": false, 00:19:13.940 "zone_management": false, 00:19:13.940 "zone_append": false, 00:19:13.940 "compare": false, 00:19:13.940 "compare_and_write": false, 00:19:13.940 "abort": false, 00:19:13.940 "seek_hole": false, 00:19:13.940 "seek_data": false, 00:19:13.940 "copy": false, 00:19:13.940 "nvme_iov_md": false 00:19:13.940 }, 00:19:13.940 "driver_specific": { 00:19:13.940 "ftl": { 00:19:13.940 "base_bdev": "27c188c3-226b-49b0-8eb6-4a001a14847f", 
00:19:13.940 "cache": "nvc0n1p0" 00:19:13.940 } 00:19:13.940 } 00:19:13.940 } 00:19:13.940 ]' 00:19:13.940 21:51:36 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:13.940 21:51:36 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:13.940 21:51:36 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:14.200 [2024-11-27 21:51:37.139966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.200 [2024-11-27 21:51:37.140008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:14.200 [2024-11-27 21:51:37.140022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:14.200 [2024-11-27 21:51:37.140031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.200 [2024-11-27 21:51:37.140075] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:14.200 [2024-11-27 21:51:37.140528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.200 [2024-11-27 21:51:37.140552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:14.200 [2024-11-27 21:51:37.140573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:19:14.200 [2024-11-27 21:51:37.140582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.200 [2024-11-27 21:51:37.141165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.200 [2024-11-27 21:51:37.141189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:14.200 [2024-11-27 21:51:37.141198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:19:14.200 [2024-11-27 21:51:37.141207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.200 [2024-11-27 21:51:37.144864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.200 [2024-11-27 21:51:37.144889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:14.200 [2024-11-27 21:51:37.144898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.625 ms 00:19:14.200 [2024-11-27 21:51:37.144908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.200 [2024-11-27 21:51:37.151901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.200 [2024-11-27 21:51:37.151935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:14.200 [2024-11-27 21:51:37.151944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.951 ms 00:19:14.200 [2024-11-27 21:51:37.151966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.200 [2024-11-27 21:51:37.153528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.200 [2024-11-27 21:51:37.153563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:14.200 [2024-11-27 21:51:37.153573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.472 ms 00:19:14.200 [2024-11-27 21:51:37.153581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.200 [2024-11-27 21:51:37.157464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.200 [2024-11-27 21:51:37.157499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:14.200 [2024-11-27 21:51:37.157508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 3.836 ms 00:19:14.200 [2024-11-27 21:51:37.157521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.200 [2024-11-27 21:51:37.157716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.201 [2024-11-27 21:51:37.157778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:14.201 [2024-11-27 21:51:37.157787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:19:14.201 [2024-11-27 21:51:37.157796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.201 [2024-11-27 21:51:37.159471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.201 [2024-11-27 21:51:37.159505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:14.201 [2024-11-27 21:51:37.159513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.642 ms 00:19:14.201 [2024-11-27 21:51:37.159525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.201 [2024-11-27 21:51:37.160833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.201 [2024-11-27 21:51:37.160866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:14.201 [2024-11-27 21:51:37.160874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.266 ms 00:19:14.201 [2024-11-27 21:51:37.160883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.201 [2024-11-27 21:51:37.162018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.201 [2024-11-27 21:51:37.162053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:14.201 [2024-11-27 21:51:37.162061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.095 ms 00:19:14.201 [2024-11-27 21:51:37.162070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.201 [2024-11-27 21:51:37.163043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.201 [2024-11-27 21:51:37.163076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:14.201 [2024-11-27 21:51:37.163085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.870 ms 00:19:14.201 [2024-11-27 21:51:37.163094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.201 [2024-11-27 21:51:37.163133] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:14.201 [2024-11-27 21:51:37.163148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163212] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 
21:51:37.163431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:19:14.201 [2024-11-27 21:51:37.163637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:14.201 [2024-11-27 21:51:37.163760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:14.202 [2024-11-27 21:51:37.163767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:14.202 [2024-11-27 21:51:37.163775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:14.202 [2024-11-27 21:51:37.163783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:14.202 [2024-11-27 21:51:37.163791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:14.202 [2024-11-27 21:51:37.163798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:14.202 [2024-11-27 21:51:37.163807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:14.202 [2024-11-27 21:51:37.163814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:14.202 [2024-11-27 21:51:37.163823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:14.202 [2024-11-27 21:51:37.163830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:19:14.202 [2024-11-27 21:51:37.163842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:14.202 [2024-11-27 21:51:37.163861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:14.202 [2024-11-27 21:51:37.163870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:14.202 [2024-11-27 21:51:37.163877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:14.202 [2024-11-27 21:51:37.163886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:14.202 [2024-11-27 21:51:37.163894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:14.202 [2024-11-27 21:51:37.163902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:14.202 [2024-11-27 21:51:37.163910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:14.202 [2024-11-27 21:51:37.163918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:14.202 [2024-11-27 21:51:37.163925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:14.202 [2024-11-27 21:51:37.163934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:14.202 [2024-11-27 21:51:37.163941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:14.202 [2024-11-27 21:51:37.163950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:14.202 [2024-11-27 21:51:37.163957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:14.202 [2024-11-27 21:51:37.163965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:14.202 [2024-11-27 21:51:37.163972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:14.202 [2024-11-27 21:51:37.163983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:14.202 [2024-11-27 21:51:37.163990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:14.202 [2024-11-27 21:51:37.164006] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:14.202 [2024-11-27 21:51:37.164014] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 78ee2df8-80e5-4b2c-bba3-34ec78c869a1 00:19:14.202 [2024-11-27 21:51:37.164024] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:14.202 [2024-11-27 21:51:37.164032] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:14.202 [2024-11-27 21:51:37.164041] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:14.202 [2024-11-27 21:51:37.164048] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:14.202 [2024-11-27 21:51:37.164057] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:14.202 [2024-11-27 21:51:37.164065] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:14.202 
[2024-11-27 21:51:37.164073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:14.202 [2024-11-27 21:51:37.164079] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:14.202 [2024-11-27 21:51:37.164086] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:14.202 [2024-11-27 21:51:37.164093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.202 [2024-11-27 21:51:37.164102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:14.202 [2024-11-27 21:51:37.164110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.962 ms 00:19:14.202 [2024-11-27 21:51:37.164120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.202 [2024-11-27 21:51:37.165642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.202 [2024-11-27 21:51:37.165667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:14.202 [2024-11-27 21:51:37.165677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.483 ms 00:19:14.202 [2024-11-27 21:51:37.165685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.202 [2024-11-27 21:51:37.165803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.202 [2024-11-27 21:51:37.165814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:14.202 [2024-11-27 21:51:37.165822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:14.202 [2024-11-27 21:51:37.165830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.202 [2024-11-27 21:51:37.171070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.202 [2024-11-27 21:51:37.171103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:14.202 [2024-11-27 21:51:37.171114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.202 [2024-11-27 21:51:37.171124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.202 [2024-11-27 21:51:37.171197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.202 [2024-11-27 21:51:37.171208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:14.202 [2024-11-27 21:51:37.171216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.202 [2024-11-27 21:51:37.171227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.202 [2024-11-27 21:51:37.171287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.202 [2024-11-27 21:51:37.171298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:14.202 [2024-11-27 21:51:37.171306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.202 [2024-11-27 21:51:37.171315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.202 [2024-11-27 21:51:37.171354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.202 [2024-11-27 21:51:37.171364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:14.202 [2024-11-27 21:51:37.171372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.202 [2024-11-27 21:51:37.171380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.202 [2024-11-27 21:51:37.180617] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:14.202 [2024-11-27 21:51:37.180656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:14.202 [2024-11-27 21:51:37.180666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.202 [2024-11-27 21:51:37.180676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.202 [2024-11-27 21:51:37.188371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.202 [2024-11-27 21:51:37.188409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:14.202 [2024-11-27 21:51:37.188429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.202 [2024-11-27 21:51:37.188440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.202 [2024-11-27 21:51:37.188532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.202 [2024-11-27 21:51:37.188545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:14.202 [2024-11-27 21:51:37.188553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.202 [2024-11-27 21:51:37.188562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.202 [2024-11-27 21:51:37.188619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.202 [2024-11-27 21:51:37.188638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:14.202 [2024-11-27 21:51:37.188646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.202 [2024-11-27 21:51:37.188654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.202 [2024-11-27 21:51:37.188736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.202 [2024-11-27 21:51:37.188762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:14.202 [2024-11-27 21:51:37.188772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.202 [2024-11-27 21:51:37.188781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.202 [2024-11-27 21:51:37.188828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.202 [2024-11-27 21:51:37.188847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:14.202 [2024-11-27 21:51:37.188854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.202 [2024-11-27 21:51:37.188864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.202 [2024-11-27 21:51:37.188918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.202 [2024-11-27 21:51:37.188938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:14.202 [2024-11-27 21:51:37.188947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.202 [2024-11-27 21:51:37.188956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.202 [2024-11-27 21:51:37.189013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.202 [2024-11-27 21:51:37.189029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:14.202 [2024-11-27 21:51:37.189037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.202 [2024-11-27 21:51:37.189047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.202 
[2024-11-27 21:51:37.189233] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 49.236 ms, result 0 00:19:14.202 true 00:19:14.202 21:51:37 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 87141 00:19:14.202 21:51:37 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87141 ']' 00:19:14.202 21:51:37 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87141 00:19:14.202 21:51:37 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:14.202 21:51:37 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.202 21:51:37 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87141 00:19:14.202 21:51:37 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:14.203 21:51:37 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:14.203 killing process with pid 87141 00:19:14.203 21:51:37 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87141' 00:19:14.203 21:51:37 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 87141 00:19:14.203 21:51:37 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 87141 00:19:19.486 21:51:41 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:19.747 65536+0 records in 00:19:19.747 65536+0 records out 00:19:19.747 268435456 bytes (268 MB, 256 MiB) copied, 0.817574 s, 328 MB/s 00:19:19.747 21:51:42 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:20.008 [2024-11-27 21:51:42.879751] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
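The xtrace lines above show how the harness tears the SPDK app down: killprocess checks that the PID argument is non-empty, confirms the process still exists with kill -0, reads its command name via ps --no-headers -o comm= (reactor_0 for an SPDK reactor, and the trace also compares it against sudo), then signals the process and waits for it to exit. A minimal bash sketch of that flow, reconstructed from the trace for illustration only (the real helper lives in common/autotest_common.sh and may differ in detail; the function name below is hypothetical):

#!/usr/bin/env bash
# Illustrative reconstruction of the teardown traced above -- not the verbatim helper.
kill_spdk_app() {                              # hypothetical name; the trace calls it killprocess
    local pid=$1
    [[ -n $pid ]] || return 1                  # mirrors the '[' -z 87141 ']' guard
    kill -0 "$pid" 2>/dev/null || return 0     # process already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0; the trace special-cases sudo
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true            # reap it so the test can continue
}

The dd summary that follows is internally consistent: 65536 records of bs=4K are 65536 * 4096 = 268435456 bytes (256 MiB), and 268435456 B / 0.817574 s is roughly 328 MB/s, matching the reported rate. A quick shell check of that arithmetic:

echo $((65536 * 4096))                                             # 268435456
awk 'BEGIN { printf "%.0f MB/s\n", 268435456 / 0.817574 / 1e6 }'   # ~328 MB/s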
00:19:20.008 [2024-11-27 21:51:42.879871] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87301 ] 00:19:20.008 [2024-11-27 21:51:43.025048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.008 [2024-11-27 21:51:43.050277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.270 [2024-11-27 21:51:43.166497] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:20.270 [2024-11-27 21:51:43.166590] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:20.270 [2024-11-27 21:51:43.324864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.270 [2024-11-27 21:51:43.324925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:20.270 [2024-11-27 21:51:43.324940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:20.270 [2024-11-27 21:51:43.324955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.270 [2024-11-27 21:51:43.327522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.270 [2024-11-27 21:51:43.327576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:20.270 [2024-11-27 21:51:43.327587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.546 ms 00:19:20.270 [2024-11-27 21:51:43.327594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.270 [2024-11-27 21:51:43.327707] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:20.270 [2024-11-27 21:51:43.328092] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:20.270 [2024-11-27 21:51:43.328136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.270 [2024-11-27 21:51:43.328145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:20.270 [2024-11-27 21:51:43.328154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:19:20.270 [2024-11-27 21:51:43.328162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.270 [2024-11-27 21:51:43.330241] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:20.270 [2024-11-27 21:51:43.333580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.270 [2024-11-27 21:51:43.333629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:20.270 [2024-11-27 21:51:43.333651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.341 ms 00:19:20.270 [2024-11-27 21:51:43.333661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.270 [2024-11-27 21:51:43.333785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.270 [2024-11-27 21:51:43.333798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:20.270 [2024-11-27 21:51:43.333807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:20.270 [2024-11-27 21:51:43.333816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.270 [2024-11-27 21:51:43.341659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:20.270 [2024-11-27 21:51:43.341712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:20.270 [2024-11-27 21:51:43.341723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.797 ms 00:19:20.270 [2024-11-27 21:51:43.341730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.270 [2024-11-27 21:51:43.341870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.270 [2024-11-27 21:51:43.341882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:20.270 [2024-11-27 21:51:43.341895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:19:20.270 [2024-11-27 21:51:43.341905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.270 [2024-11-27 21:51:43.341933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.270 [2024-11-27 21:51:43.341943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:20.270 [2024-11-27 21:51:43.341951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:20.270 [2024-11-27 21:51:43.341958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.270 [2024-11-27 21:51:43.341987] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:20.270 [2024-11-27 21:51:43.344027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.270 [2024-11-27 21:51:43.344064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:20.270 [2024-11-27 21:51:43.344074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.047 ms 00:19:20.270 [2024-11-27 21:51:43.344087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.270 [2024-11-27 21:51:43.344132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.270 [2024-11-27 21:51:43.344141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:20.270 [2024-11-27 21:51:43.344150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:20.270 [2024-11-27 21:51:43.344157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.270 [2024-11-27 21:51:43.344177] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:20.270 [2024-11-27 21:51:43.344197] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:20.270 [2024-11-27 21:51:43.344240] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:20.270 [2024-11-27 21:51:43.344262] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:20.270 [2024-11-27 21:51:43.344404] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:20.270 [2024-11-27 21:51:43.344419] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:20.270 [2024-11-27 21:51:43.344434] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:20.270 [2024-11-27 21:51:43.344445] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:20.270 [2024-11-27 21:51:43.344455] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:20.270 [2024-11-27 21:51:43.344464] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:20.270 [2024-11-27 21:51:43.344472] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:20.270 [2024-11-27 21:51:43.344479] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:20.270 [2024-11-27 21:51:43.344489] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:20.270 [2024-11-27 21:51:43.344502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.270 [2024-11-27 21:51:43.344510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:20.270 [2024-11-27 21:51:43.344517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:19:20.270 [2024-11-27 21:51:43.344524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.270 [2024-11-27 21:51:43.344614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.270 [2024-11-27 21:51:43.344624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:20.270 [2024-11-27 21:51:43.344636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:20.270 [2024-11-27 21:51:43.344643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.270 [2024-11-27 21:51:43.344743] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:20.270 [2024-11-27 21:51:43.344770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:20.270 [2024-11-27 21:51:43.344780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:20.270 [2024-11-27 21:51:43.344790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.270 [2024-11-27 21:51:43.344799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:20.270 [2024-11-27 21:51:43.344807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:20.270 [2024-11-27 21:51:43.344815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:20.270 [2024-11-27 21:51:43.344825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:20.270 [2024-11-27 21:51:43.344833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:20.270 [2024-11-27 21:51:43.344841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:20.270 [2024-11-27 21:51:43.344849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:20.270 [2024-11-27 21:51:43.344857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:20.270 [2024-11-27 21:51:43.344867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:20.270 [2024-11-27 21:51:43.344875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:20.270 [2024-11-27 21:51:43.344883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:20.270 [2024-11-27 21:51:43.344891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.270 [2024-11-27 21:51:43.344899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:20.270 [2024-11-27 21:51:43.344907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:20.270 [2024-11-27 21:51:43.344915] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.270 [2024-11-27 21:51:43.344923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:20.270 [2024-11-27 21:51:43.344931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:20.270 [2024-11-27 21:51:43.344938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.270 [2024-11-27 21:51:43.344946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:20.270 [2024-11-27 21:51:43.344961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:20.270 [2024-11-27 21:51:43.344969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.270 [2024-11-27 21:51:43.344977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:20.270 [2024-11-27 21:51:43.344985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:20.270 [2024-11-27 21:51:43.344992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.270 [2024-11-27 21:51:43.345000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:20.270 [2024-11-27 21:51:43.345007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:20.270 [2024-11-27 21:51:43.345015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.270 [2024-11-27 21:51:43.345022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:20.271 [2024-11-27 21:51:43.345030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:20.271 [2024-11-27 21:51:43.345038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:20.271 [2024-11-27 21:51:43.345045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:20.271 [2024-11-27 21:51:43.345053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:20.271 [2024-11-27 21:51:43.345060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:20.271 [2024-11-27 21:51:43.345068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:20.271 [2024-11-27 21:51:43.345076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:20.271 [2024-11-27 21:51:43.345085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.271 [2024-11-27 21:51:43.345094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:20.271 [2024-11-27 21:51:43.345101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:20.271 [2024-11-27 21:51:43.345108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.271 [2024-11-27 21:51:43.345115] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:20.271 [2024-11-27 21:51:43.345127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:20.271 [2024-11-27 21:51:43.345136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:20.271 [2024-11-27 21:51:43.345145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.271 [2024-11-27 21:51:43.345154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:20.271 [2024-11-27 21:51:43.345162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:20.271 [2024-11-27 21:51:43.345170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:20.271 
[2024-11-27 21:51:43.345177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:20.271 [2024-11-27 21:51:43.345183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:20.271 [2024-11-27 21:51:43.345190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:20.271 [2024-11-27 21:51:43.345198] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:20.271 [2024-11-27 21:51:43.345211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:20.271 [2024-11-27 21:51:43.345225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:20.271 [2024-11-27 21:51:43.345232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:20.271 [2024-11-27 21:51:43.345240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:20.271 [2024-11-27 21:51:43.345247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:20.271 [2024-11-27 21:51:43.345254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:20.271 [2024-11-27 21:51:43.345261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:20.271 [2024-11-27 21:51:43.345268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:20.271 [2024-11-27 21:51:43.345281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:20.271 [2024-11-27 21:51:43.345289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:20.271 [2024-11-27 21:51:43.345296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:20.271 [2024-11-27 21:51:43.345303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:20.271 [2024-11-27 21:51:43.345310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:20.271 [2024-11-27 21:51:43.345317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:20.271 [2024-11-27 21:51:43.345325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:20.271 [2024-11-27 21:51:43.345352] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:20.271 [2024-11-27 21:51:43.345364] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:20.271 [2024-11-27 21:51:43.345378] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:20.271 [2024-11-27 21:51:43.345385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:20.271 [2024-11-27 21:51:43.345393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:20.271 [2024-11-27 21:51:43.345401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:20.271 [2024-11-27 21:51:43.345409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.271 [2024-11-27 21:51:43.345417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:20.271 [2024-11-27 21:51:43.345426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.735 ms 00:19:20.271 [2024-11-27 21:51:43.345434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.271 [2024-11-27 21:51:43.359129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.271 [2024-11-27 21:51:43.359175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:20.271 [2024-11-27 21:51:43.359187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.639 ms 00:19:20.271 [2024-11-27 21:51:43.359196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.271 [2024-11-27 21:51:43.359327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.271 [2024-11-27 21:51:43.359361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:20.271 [2024-11-27 21:51:43.359371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:20.271 [2024-11-27 21:51:43.359380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.271 [2024-11-27 21:51:43.380080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.271 [2024-11-27 21:51:43.380136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:20.271 [2024-11-27 21:51:43.380148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.675 ms 00:19:20.271 [2024-11-27 21:51:43.380157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.271 [2024-11-27 21:51:43.380256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.271 [2024-11-27 21:51:43.380275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:20.271 [2024-11-27 21:51:43.380289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:20.271 [2024-11-27 21:51:43.380298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.271 [2024-11-27 21:51:43.380829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.271 [2024-11-27 21:51:43.380869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:20.271 [2024-11-27 21:51:43.380882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.506 ms 00:19:20.271 [2024-11-27 21:51:43.380893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.271 [2024-11-27 21:51:43.381061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.271 [2024-11-27 21:51:43.381076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:20.271 [2024-11-27 21:51:43.381086] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:19:20.271 [2024-11-27 21:51:43.381095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.533 [2024-11-27 21:51:43.389556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.533 [2024-11-27 21:51:43.389599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:20.533 [2024-11-27 21:51:43.389616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.438 ms 00:19:20.533 [2024-11-27 21:51:43.389624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.533 [2024-11-27 21:51:43.393378] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:20.533 [2024-11-27 21:51:43.393427] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:20.533 [2024-11-27 21:51:43.393440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.533 [2024-11-27 21:51:43.393448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:20.533 [2024-11-27 21:51:43.393457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.697 ms 00:19:20.533 [2024-11-27 21:51:43.393464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.533 [2024-11-27 21:51:43.409687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.533 [2024-11-27 21:51:43.409719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:20.533 [2024-11-27 21:51:43.409730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.165 ms 00:19:20.533 [2024-11-27 21:51:43.409737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.533 [2024-11-27 21:51:43.411853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.533 [2024-11-27 21:51:43.411882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:20.533 [2024-11-27 21:51:43.411891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.051 ms 00:19:20.533 [2024-11-27 21:51:43.411898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.533 [2024-11-27 21:51:43.413647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.533 [2024-11-27 21:51:43.413688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:20.533 [2024-11-27 21:51:43.413696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.708 ms 00:19:20.533 [2024-11-27 21:51:43.413703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.533 [2024-11-27 21:51:43.414014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.533 [2024-11-27 21:51:43.414040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:20.533 [2024-11-27 21:51:43.414052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:19:20.533 [2024-11-27 21:51:43.414059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.533 [2024-11-27 21:51:43.429627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.533 [2024-11-27 21:51:43.429671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:20.533 [2024-11-27 21:51:43.429696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
15.534 ms 00:19:20.533 [2024-11-27 21:51:43.429704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.533 [2024-11-27 21:51:43.437123] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:20.533 [2024-11-27 21:51:43.451141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.533 [2024-11-27 21:51:43.451179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:20.533 [2024-11-27 21:51:43.451198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.383 ms 00:19:20.533 [2024-11-27 21:51:43.451206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.533 [2024-11-27 21:51:43.451272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.533 [2024-11-27 21:51:43.451287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:20.533 [2024-11-27 21:51:43.451299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:20.533 [2024-11-27 21:51:43.451309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.533 [2024-11-27 21:51:43.451370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.533 [2024-11-27 21:51:43.451380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:20.533 [2024-11-27 21:51:43.451387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:19:20.533 [2024-11-27 21:51:43.451395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.533 [2024-11-27 21:51:43.451424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.533 [2024-11-27 21:51:43.451432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:20.533 [2024-11-27 21:51:43.451440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:20.533 [2024-11-27 21:51:43.451447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.533 [2024-11-27 21:51:43.451478] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:20.533 [2024-11-27 21:51:43.451487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.533 [2024-11-27 21:51:43.451494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:20.533 [2024-11-27 21:51:43.451505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:20.533 [2024-11-27 21:51:43.451517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.533 [2024-11-27 21:51:43.455375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.533 [2024-11-27 21:51:43.455409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:20.533 [2024-11-27 21:51:43.455419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.837 ms 00:19:20.533 [2024-11-27 21:51:43.455426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.533 [2024-11-27 21:51:43.455508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.533 [2024-11-27 21:51:43.455519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:20.533 [2024-11-27 21:51:43.455527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:20.533 [2024-11-27 21:51:43.455535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.534 
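The superblock metadata layout printed earlier in this startup trace (the Region type/blk_offs/blk_sz lines) and the human-readable region dump before it are two views of the same geometry. The hex block counts only reconcile with the MiB figures if one FTL block is 4 KiB, which is assumed below; by size, the 0x5a00-block region lines up with the 90.00 MiB l2p entry, the 0x1900000-block region with the 102400.00 MiB data_btm entry, and the 261120-block bands in the band dumps come out at 1020 MiB each. A quick shell check of that arithmetic:

# Assuming 4 KiB FTL blocks -- the only block size consistent with both dumps above.
echo $(( 0x5a00    * 4096 / 1048576 ))   # 90     -> matches "blocks: 90.00 MiB" for l2p
echo $(( 0x1900000 * 4096 / 1048576 ))   # 102400 -> matches "blocks: 102400.00 MiB" for data_btm
echo $(( 261120    * 4096 / 1048576 ))   # 1020   -> size of each "Band N: 0 / 261120" band in MiB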
[2024-11-27 21:51:43.456363] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:20.534 [2024-11-27 21:51:43.457351] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 131.231 ms, result 0 00:19:20.534 [2024-11-27 21:51:43.458352] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:20.534 [2024-11-27 21:51:43.467870] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:21.480  [2024-11-27T21:51:45.546Z] Copying: 15/256 [MB] (15 MBps) [2024-11-27T21:51:46.496Z] Copying: 28/256 [MB] (13 MBps) [2024-11-27T21:51:47.868Z] Copying: 53/256 [MB] (25 MBps) [2024-11-27T21:51:48.802Z] Copying: 93/256 [MB] (39 MBps) [2024-11-27T21:51:49.750Z] Copying: 131/256 [MB] (38 MBps) [2024-11-27T21:51:50.692Z] Copying: 170/256 [MB] (39 MBps) [2024-11-27T21:51:51.632Z] Copying: 190/256 [MB] (19 MBps) [2024-11-27T21:51:52.572Z] Copying: 207/256 [MB] (17 MBps) [2024-11-27T21:51:53.513Z] Copying: 238/256 [MB] (30 MBps) [2024-11-27T21:51:53.775Z] Copying: 253/256 [MB] (14 MBps) [2024-11-27T21:51:53.775Z] Copying: 256/256 [MB] (average 25 MBps)[2024-11-27 21:51:53.531298] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:30.654 [2024-11-27 21:51:53.532268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.654 [2024-11-27 21:51:53.532301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:30.654 [2024-11-27 21:51:53.532311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:30.654 [2024-11-27 21:51:53.532317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.654 [2024-11-27 21:51:53.532333] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:30.654 [2024-11-27 21:51:53.532717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.654 [2024-11-27 21:51:53.532737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:30.655 [2024-11-27 21:51:53.532749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:19:30.655 [2024-11-27 21:51:53.532758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.655 [2024-11-27 21:51:53.534164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.655 [2024-11-27 21:51:53.534199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:30.655 [2024-11-27 21:51:53.534207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.390 ms 00:19:30.655 [2024-11-27 21:51:53.534215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.655 [2024-11-27 21:51:53.539141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.655 [2024-11-27 21:51:53.539170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:30.655 [2024-11-27 21:51:53.539177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.912 ms 00:19:30.655 [2024-11-27 21:51:53.539184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.655 [2024-11-27 21:51:53.544541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.655 [2024-11-27 21:51:53.544571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Finish L2P trims 00:19:30.655 [2024-11-27 21:51:53.544578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.331 ms 00:19:30.655 [2024-11-27 21:51:53.544584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.655 [2024-11-27 21:51:53.545549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.655 [2024-11-27 21:51:53.545579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:30.655 [2024-11-27 21:51:53.545587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.924 ms 00:19:30.655 [2024-11-27 21:51:53.545592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.655 [2024-11-27 21:51:53.549122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.655 [2024-11-27 21:51:53.549161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:30.655 [2024-11-27 21:51:53.549169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.505 ms 00:19:30.655 [2024-11-27 21:51:53.549174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.655 [2024-11-27 21:51:53.549265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.655 [2024-11-27 21:51:53.549272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:30.655 [2024-11-27 21:51:53.549278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:30.655 [2024-11-27 21:51:53.549287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.655 [2024-11-27 21:51:53.551288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.655 [2024-11-27 21:51:53.551318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:30.655 [2024-11-27 21:51:53.551324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.989 ms 00:19:30.655 [2024-11-27 21:51:53.551329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.655 [2024-11-27 21:51:53.552587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.655 [2024-11-27 21:51:53.552614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:30.655 [2024-11-27 21:51:53.552620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.224 ms 00:19:30.655 [2024-11-27 21:51:53.552625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.655 [2024-11-27 21:51:53.553534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.655 [2024-11-27 21:51:53.553561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:30.655 [2024-11-27 21:51:53.553568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.885 ms 00:19:30.655 [2024-11-27 21:51:53.553574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.655 [2024-11-27 21:51:53.554633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.655 [2024-11-27 21:51:53.554661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:30.655 [2024-11-27 21:51:53.554667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.014 ms 00:19:30.655 [2024-11-27 21:51:53.554673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.655 [2024-11-27 21:51:53.554705] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 
00:19:30.655 [2024-11-27 21:51:53.554717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:30.655 [2024-11-27 21:51:53.554724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:30.655 [2024-11-27 21:51:53.554731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:30.655 [2024-11-27 21:51:53.554737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:30.655 [2024-11-27 21:51:53.554742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:30.655 [2024-11-27 21:51:53.554748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:30.655 [2024-11-27 21:51:53.554754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:30.655 [2024-11-27 21:51:53.554759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:30.655 [2024-11-27 21:51:53.554765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:30.655 [2024-11-27 21:51:53.554771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:30.655 [2024-11-27 21:51:53.554777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:30.655 [2024-11-27 21:51:53.554782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:30.655 [2024-11-27 21:51:53.554788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:30.655 [2024-11-27 21:51:53.554794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:30.655 [2024-11-27 21:51:53.554799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:30.655 [2024-11-27 21:51:53.554805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:30.655 [2024-11-27 21:51:53.554811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:30.655 [2024-11-27 21:51:53.554816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:30.655 [2024-11-27 21:51:53.554822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:30.655 [2024-11-27 21:51:53.554827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:30.655 [2024-11-27 21:51:53.554833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:30.655 [2024-11-27 21:51:53.554839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:30.655 [2024-11-27 21:51:53.554844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:30.655 [2024-11-27 21:51:53.554850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:30.655 [2024-11-27 21:51:53.554855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 
state: free 00:19:30.655 [2024-11-27 21:51:53.554861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:30.655 [2024-11-27 21:51:53.554866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:30.655 [2024-11-27 21:51:53.554872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:30.655 [2024-11-27 21:51:53.554878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:30.655 [2024-11-27 21:51:53.554884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:30.655 [2024-11-27 21:51:53.554898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.554904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.554910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.554916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.554922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.554928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.554934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.554940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.554945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.554951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.554957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.554962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.554968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.554974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.554980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.554986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.554991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.554997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 
0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555293] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:30.656 [2024-11-27 21:51:53.555305] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:30.656 [2024-11-27 21:51:53.555311] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 78ee2df8-80e5-4b2c-bba3-34ec78c869a1 00:19:30.656 [2024-11-27 21:51:53.555317] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:30.656 [2024-11-27 21:51:53.555323] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:30.656 [2024-11-27 21:51:53.555328] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:30.656 [2024-11-27 21:51:53.555343] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:30.656 [2024-11-27 21:51:53.555352] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:30.656 [2024-11-27 21:51:53.555358] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:30.656 [2024-11-27 21:51:53.555364] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:30.656 [2024-11-27 21:51:53.555369] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:30.656 [2024-11-27 21:51:53.555374] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:30.656 [2024-11-27 21:51:53.555379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.656 [2024-11-27 21:51:53.555387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:30.656 [2024-11-27 21:51:53.555394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.675 ms 00:19:30.656 [2024-11-27 21:51:53.555399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.656 [2024-11-27 21:51:53.556625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.656 [2024-11-27 21:51:53.556646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:30.656 [2024-11-27 21:51:53.556653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.213 ms 00:19:30.656 [2024-11-27 21:51:53.556659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.656 [2024-11-27 21:51:53.556730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.656 [2024-11-27 21:51:53.556736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:30.656 [2024-11-27 21:51:53.556742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:30.656 [2024-11-27 21:51:53.556747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.656 [2024-11-27 21:51:53.561045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.657 [2024-11-27 21:51:53.561076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:30.657 [2024-11-27 21:51:53.561084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.657 [2024-11-27 21:51:53.561089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.657 [2024-11-27 21:51:53.561134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.657 [2024-11-27 21:51:53.561140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:30.657 [2024-11-27 21:51:53.561146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.657 [2024-11-27 21:51:53.561151] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.657 [2024-11-27 21:51:53.561180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.657 [2024-11-27 21:51:53.561190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:30.657 [2024-11-27 21:51:53.561195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.657 [2024-11-27 21:51:53.561200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.657 [2024-11-27 21:51:53.561215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.657 [2024-11-27 21:51:53.561221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:30.657 [2024-11-27 21:51:53.561227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.657 [2024-11-27 21:51:53.561232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.657 [2024-11-27 21:51:53.568908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.657 [2024-11-27 21:51:53.568946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:30.657 [2024-11-27 21:51:53.568953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.657 [2024-11-27 21:51:53.568959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.657 [2024-11-27 21:51:53.575112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.657 [2024-11-27 21:51:53.575152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:30.657 [2024-11-27 21:51:53.575160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.657 [2024-11-27 21:51:53.575166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.657 [2024-11-27 21:51:53.575188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.657 [2024-11-27 21:51:53.575194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:30.657 [2024-11-27 21:51:53.575200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.657 [2024-11-27 21:51:53.575207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.657 [2024-11-27 21:51:53.575229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.657 [2024-11-27 21:51:53.575236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:30.657 [2024-11-27 21:51:53.575247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.657 [2024-11-27 21:51:53.575252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.657 [2024-11-27 21:51:53.575302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.657 [2024-11-27 21:51:53.575310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:30.657 [2024-11-27 21:51:53.575316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.657 [2024-11-27 21:51:53.575327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.657 [2024-11-27 21:51:53.575361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.657 [2024-11-27 21:51:53.575372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:30.657 [2024-11-27 21:51:53.575380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:19:30.657 [2024-11-27 21:51:53.575386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.657 [2024-11-27 21:51:53.575415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.657 [2024-11-27 21:51:53.575422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:30.657 [2024-11-27 21:51:53.575428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.657 [2024-11-27 21:51:53.575434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.657 [2024-11-27 21:51:53.575469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:30.657 [2024-11-27 21:51:53.575477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:30.657 [2024-11-27 21:51:53.575485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:30.657 [2024-11-27 21:51:53.575490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.657 [2024-11-27 21:51:53.575593] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 43.306 ms, result 0 00:19:30.918 00:19:30.918 00:19:30.918 21:51:53 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=87420 00:19:30.918 21:51:53 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:30.918 21:51:53 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 87420 00:19:30.918 21:51:53 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 87420 ']' 00:19:30.918 21:51:53 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.918 21:51:53 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:30.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.918 21:51:53 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.918 21:51:53 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:30.918 21:51:53 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:31.177 [2024-11-27 21:51:54.052824] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:19:31.177 [2024-11-27 21:51:54.052949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87420 ] 00:19:31.177 [2024-11-27 21:51:54.193438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.177 [2024-11-27 21:51:54.210600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.112 21:51:54 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.112 21:51:54 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:32.112 21:51:54 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:32.112 [2024-11-27 21:51:55.092292] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:32.112 [2024-11-27 21:51:55.092359] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:32.371 [2024-11-27 21:51:55.254768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.371 [2024-11-27 21:51:55.254806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:32.371 [2024-11-27 21:51:55.254817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:32.371 [2024-11-27 21:51:55.254825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.371 [2024-11-27 21:51:55.256582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.371 [2024-11-27 21:51:55.256613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:32.371 [2024-11-27 21:51:55.256620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.739 ms 00:19:32.371 [2024-11-27 21:51:55.256627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.371 [2024-11-27 21:51:55.256680] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:32.371 [2024-11-27 21:51:55.256854] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:32.371 [2024-11-27 21:51:55.256873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.371 [2024-11-27 21:51:55.256881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:32.371 [2024-11-27 21:51:55.256888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:19:32.371 [2024-11-27 21:51:55.256894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.371 [2024-11-27 21:51:55.257981] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:32.371 [2024-11-27 21:51:55.259901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.371 [2024-11-27 21:51:55.259929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:32.371 [2024-11-27 21:51:55.259939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.919 ms 00:19:32.371 [2024-11-27 21:51:55.259945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.371 [2024-11-27 21:51:55.259991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.371 [2024-11-27 21:51:55.260000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:32.371 [2024-11-27 21:51:55.260009] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:32.371 [2024-11-27 21:51:55.260018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.371 [2024-11-27 21:51:55.264406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.371 [2024-11-27 21:51:55.264432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:32.371 [2024-11-27 21:51:55.264441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.348 ms 00:19:32.371 [2024-11-27 21:51:55.264448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.371 [2024-11-27 21:51:55.264531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.371 [2024-11-27 21:51:55.264539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:32.371 [2024-11-27 21:51:55.264547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:19:32.371 [2024-11-27 21:51:55.264555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.371 [2024-11-27 21:51:55.264578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.371 [2024-11-27 21:51:55.264584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:32.371 [2024-11-27 21:51:55.264594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:32.371 [2024-11-27 21:51:55.264600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.371 [2024-11-27 21:51:55.264618] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:32.371 [2024-11-27 21:51:55.265764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.371 [2024-11-27 21:51:55.265794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:32.371 [2024-11-27 21:51:55.265803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.151 ms 00:19:32.371 [2024-11-27 21:51:55.265810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.371 [2024-11-27 21:51:55.265835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.371 [2024-11-27 21:51:55.265844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:32.371 [2024-11-27 21:51:55.265850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:32.371 [2024-11-27 21:51:55.265857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.371 [2024-11-27 21:51:55.265871] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:32.371 [2024-11-27 21:51:55.265887] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:32.371 [2024-11-27 21:51:55.265916] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:32.371 [2024-11-27 21:51:55.265937] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:32.371 [2024-11-27 21:51:55.266015] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:32.371 [2024-11-27 21:51:55.266029] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:32.371 [2024-11-27 21:51:55.266037] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:32.371 [2024-11-27 21:51:55.266051] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:32.371 [2024-11-27 21:51:55.266058] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:32.371 [2024-11-27 21:51:55.266066] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:32.372 [2024-11-27 21:51:55.266072] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:32.372 [2024-11-27 21:51:55.266079] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:32.372 [2024-11-27 21:51:55.266085] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:32.372 [2024-11-27 21:51:55.266093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.372 [2024-11-27 21:51:55.266098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:32.372 [2024-11-27 21:51:55.266105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:19:32.372 [2024-11-27 21:51:55.266111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.372 [2024-11-27 21:51:55.266181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.372 [2024-11-27 21:51:55.266191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:32.372 [2024-11-27 21:51:55.266198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:19:32.372 [2024-11-27 21:51:55.266204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.372 [2024-11-27 21:51:55.266285] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:32.372 [2024-11-27 21:51:55.266296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:32.372 [2024-11-27 21:51:55.266304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:32.372 [2024-11-27 21:51:55.266310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.372 [2024-11-27 21:51:55.266319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:32.372 [2024-11-27 21:51:55.266324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:32.372 [2024-11-27 21:51:55.266330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:32.372 [2024-11-27 21:51:55.266347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:32.372 [2024-11-27 21:51:55.266356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:32.372 [2024-11-27 21:51:55.266361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:32.372 [2024-11-27 21:51:55.266368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:32.372 [2024-11-27 21:51:55.266373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:32.372 [2024-11-27 21:51:55.266380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:32.372 [2024-11-27 21:51:55.266385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:32.372 [2024-11-27 21:51:55.266392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:32.372 [2024-11-27 21:51:55.266396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.372 
[2024-11-27 21:51:55.266403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:32.372 [2024-11-27 21:51:55.266408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:32.372 [2024-11-27 21:51:55.266414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.372 [2024-11-27 21:51:55.266420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:32.372 [2024-11-27 21:51:55.266428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:32.372 [2024-11-27 21:51:55.266434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:32.372 [2024-11-27 21:51:55.266441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:32.372 [2024-11-27 21:51:55.266447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:32.372 [2024-11-27 21:51:55.266454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:32.372 [2024-11-27 21:51:55.266460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:32.372 [2024-11-27 21:51:55.266467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:32.372 [2024-11-27 21:51:55.266472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:32.372 [2024-11-27 21:51:55.266480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:32.372 [2024-11-27 21:51:55.266486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:32.372 [2024-11-27 21:51:55.266492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:32.372 [2024-11-27 21:51:55.266498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:32.372 [2024-11-27 21:51:55.266505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:32.372 [2024-11-27 21:51:55.266510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:32.372 [2024-11-27 21:51:55.266517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:32.372 [2024-11-27 21:51:55.266523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:32.372 [2024-11-27 21:51:55.266532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:32.372 [2024-11-27 21:51:55.266537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:32.372 [2024-11-27 21:51:55.266544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:32.372 [2024-11-27 21:51:55.266549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.372 [2024-11-27 21:51:55.266557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:32.372 [2024-11-27 21:51:55.266562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:32.372 [2024-11-27 21:51:55.266569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.372 [2024-11-27 21:51:55.266575] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:32.372 [2024-11-27 21:51:55.266584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:32.372 [2024-11-27 21:51:55.266590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:32.372 [2024-11-27 21:51:55.266598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.372 [2024-11-27 21:51:55.266605] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:32.372 [2024-11-27 21:51:55.266612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:32.372 [2024-11-27 21:51:55.266617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:32.372 [2024-11-27 21:51:55.266624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:32.372 [2024-11-27 21:51:55.266630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:32.372 [2024-11-27 21:51:55.266638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:32.372 [2024-11-27 21:51:55.266645] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:32.372 [2024-11-27 21:51:55.266654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:32.372 [2024-11-27 21:51:55.266662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:32.372 [2024-11-27 21:51:55.266670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:32.372 [2024-11-27 21:51:55.266677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:32.372 [2024-11-27 21:51:55.266684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:32.372 [2024-11-27 21:51:55.266690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:32.372 [2024-11-27 21:51:55.266699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:32.372 [2024-11-27 21:51:55.266705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:32.372 [2024-11-27 21:51:55.266712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:32.372 [2024-11-27 21:51:55.266719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:32.373 [2024-11-27 21:51:55.266726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:32.373 [2024-11-27 21:51:55.266732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:32.373 [2024-11-27 21:51:55.266744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:32.373 [2024-11-27 21:51:55.266751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:32.373 [2024-11-27 21:51:55.266759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:32.373 [2024-11-27 21:51:55.266766] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:32.373 [2024-11-27 
21:51:55.266776] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:32.373 [2024-11-27 21:51:55.266782] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:32.373 [2024-11-27 21:51:55.266791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:32.373 [2024-11-27 21:51:55.266797] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:32.373 [2024-11-27 21:51:55.266805] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:32.373 [2024-11-27 21:51:55.266810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.373 [2024-11-27 21:51:55.266819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:32.373 [2024-11-27 21:51:55.266824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:19:32.373 [2024-11-27 21:51:55.266831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.373 [2024-11-27 21:51:55.274690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.373 [2024-11-27 21:51:55.274719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:32.373 [2024-11-27 21:51:55.274727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.818 ms 00:19:32.373 [2024-11-27 21:51:55.274734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.373 [2024-11-27 21:51:55.274825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.373 [2024-11-27 21:51:55.274842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:32.373 [2024-11-27 21:51:55.274848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:19:32.373 [2024-11-27 21:51:55.274855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.373 [2024-11-27 21:51:55.282231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.373 [2024-11-27 21:51:55.282260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:32.373 [2024-11-27 21:51:55.282267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.358 ms 00:19:32.373 [2024-11-27 21:51:55.282277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.373 [2024-11-27 21:51:55.282312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.373 [2024-11-27 21:51:55.282320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:32.373 [2024-11-27 21:51:55.282326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:32.373 [2024-11-27 21:51:55.282344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.373 [2024-11-27 21:51:55.282634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.373 [2024-11-27 21:51:55.282656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:32.373 [2024-11-27 21:51:55.282663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:19:32.373 [2024-11-27 21:51:55.282671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:32.373 [2024-11-27 21:51:55.282766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.373 [2024-11-27 21:51:55.282780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:32.373 [2024-11-27 21:51:55.282786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:19:32.373 [2024-11-27 21:51:55.282797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.373 [2024-11-27 21:51:55.287640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.373 [2024-11-27 21:51:55.287669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:32.373 [2024-11-27 21:51:55.287676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.827 ms 00:19:32.373 [2024-11-27 21:51:55.287683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.373 [2024-11-27 21:51:55.299645] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:32.373 [2024-11-27 21:51:55.299705] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:32.373 [2024-11-27 21:51:55.299724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.373 [2024-11-27 21:51:55.299739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:32.373 [2024-11-27 21:51:55.299753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.960 ms 00:19:32.373 [2024-11-27 21:51:55.299767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.373 [2024-11-27 21:51:55.315663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.373 [2024-11-27 21:51:55.315697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:32.373 [2024-11-27 21:51:55.315706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.837 ms 00:19:32.373 [2024-11-27 21:51:55.315718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.373 [2024-11-27 21:51:55.317296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.373 [2024-11-27 21:51:55.317329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:32.373 [2024-11-27 21:51:55.317346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.519 ms 00:19:32.373 [2024-11-27 21:51:55.317354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.373 [2024-11-27 21:51:55.318569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.373 [2024-11-27 21:51:55.318598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:32.373 [2024-11-27 21:51:55.318604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.185 ms 00:19:32.373 [2024-11-27 21:51:55.318612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.373 [2024-11-27 21:51:55.318858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.373 [2024-11-27 21:51:55.318877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:32.373 [2024-11-27 21:51:55.318884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:19:32.373 [2024-11-27 21:51:55.318891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.373 [2024-11-27 21:51:55.332906] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.373 [2024-11-27 21:51:55.332946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:32.373 [2024-11-27 21:51:55.332955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.997 ms 00:19:32.373 [2024-11-27 21:51:55.332964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.373 [2024-11-27 21:51:55.338737] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:32.373 [2024-11-27 21:51:55.350145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.373 [2024-11-27 21:51:55.350175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:32.373 [2024-11-27 21:51:55.350186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.131 ms 00:19:32.373 [2024-11-27 21:51:55.350191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.373 [2024-11-27 21:51:55.350263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.373 [2024-11-27 21:51:55.350272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:32.374 [2024-11-27 21:51:55.350280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:32.374 [2024-11-27 21:51:55.350289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.374 [2024-11-27 21:51:55.350331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.374 [2024-11-27 21:51:55.350349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:32.374 [2024-11-27 21:51:55.350356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:32.374 [2024-11-27 21:51:55.350362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.374 [2024-11-27 21:51:55.350383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.374 [2024-11-27 21:51:55.350389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:32.374 [2024-11-27 21:51:55.350401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:32.374 [2024-11-27 21:51:55.350406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.374 [2024-11-27 21:51:55.350432] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:32.374 [2024-11-27 21:51:55.350439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.374 [2024-11-27 21:51:55.350446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:32.374 [2024-11-27 21:51:55.350451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:32.374 [2024-11-27 21:51:55.350458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.374 [2024-11-27 21:51:55.353906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.374 [2024-11-27 21:51:55.353939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:32.374 [2024-11-27 21:51:55.353946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.432 ms 00:19:32.374 [2024-11-27 21:51:55.353956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.374 [2024-11-27 21:51:55.354013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.374 [2024-11-27 21:51:55.354022] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:32.374 [2024-11-27 21:51:55.354028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:32.374 [2024-11-27 21:51:55.354035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.374 [2024-11-27 21:51:55.354790] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:32.374 [2024-11-27 21:51:55.355618] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 99.800 ms, result 0 00:19:32.374 [2024-11-27 21:51:55.356615] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:32.374 Some configs were skipped because the RPC state that can call them passed over. 00:19:32.374 21:51:55 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:32.632 [2024-11-27 21:51:55.567992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.632 [2024-11-27 21:51:55.568024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:32.632 [2024-11-27 21:51:55.568035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.257 ms 00:19:32.632 [2024-11-27 21:51:55.568041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.632 [2024-11-27 21:51:55.568071] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.338 ms, result 0 00:19:32.632 true 00:19:32.632 21:51:55 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:32.892 [2024-11-27 21:51:55.767828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.892 [2024-11-27 21:51:55.767864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:32.892 [2024-11-27 21:51:55.767872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.930 ms 00:19:32.892 [2024-11-27 21:51:55.767879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.892 [2024-11-27 21:51:55.767905] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.005 ms, result 0 00:19:32.892 true 00:19:32.892 21:51:55 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 87420 00:19:32.892 21:51:55 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87420 ']' 00:19:32.892 21:51:55 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87420 00:19:32.892 21:51:55 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:32.892 21:51:55 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:32.892 21:51:55 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87420 00:19:32.892 21:51:55 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:32.892 21:51:55 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:32.892 killing process with pid 87420 00:19:32.892 21:51:55 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87420' 00:19:32.892 21:51:55 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 87420 00:19:32.892 21:51:55 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 87420 00:19:32.892 [2024-11-27 21:51:55.901660] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.892 [2024-11-27 21:51:55.901706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:32.892 [2024-11-27 21:51:55.901716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:32.892 [2024-11-27 21:51:55.901723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.892 [2024-11-27 21:51:55.901745] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:32.892 [2024-11-27 21:51:55.902150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.892 [2024-11-27 21:51:55.902178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:32.892 [2024-11-27 21:51:55.902186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:19:32.892 [2024-11-27 21:51:55.902193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.892 [2024-11-27 21:51:55.902415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.892 [2024-11-27 21:51:55.902430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:32.892 [2024-11-27 21:51:55.902438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:19:32.892 [2024-11-27 21:51:55.902446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.892 [2024-11-27 21:51:55.906086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.892 [2024-11-27 21:51:55.906118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:32.892 [2024-11-27 21:51:55.906126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.626 ms 00:19:32.892 [2024-11-27 21:51:55.906136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.892 [2024-11-27 21:51:55.911349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.892 [2024-11-27 21:51:55.911379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:32.892 [2024-11-27 21:51:55.911388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.186 ms 00:19:32.892 [2024-11-27 21:51:55.911396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.892 [2024-11-27 21:51:55.913788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.892 [2024-11-27 21:51:55.913832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:32.892 [2024-11-27 21:51:55.913841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.336 ms 00:19:32.892 [2024-11-27 21:51:55.913848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.892 [2024-11-27 21:51:55.917424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.892 [2024-11-27 21:51:55.917456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:32.892 [2024-11-27 21:51:55.917466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.541 ms 00:19:32.892 [2024-11-27 21:51:55.917473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.892 [2024-11-27 21:51:55.917569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.892 [2024-11-27 21:51:55.917579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:32.892 [2024-11-27 21:51:55.917590] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:32.892 [2024-11-27 21:51:55.917597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.892 [2024-11-27 21:51:55.920089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.892 [2024-11-27 21:51:55.920122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:32.892 [2024-11-27 21:51:55.920129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.479 ms 00:19:32.892 [2024-11-27 21:51:55.920140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.892 [2024-11-27 21:51:55.921825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.892 [2024-11-27 21:51:55.921857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:32.892 [2024-11-27 21:51:55.921864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.657 ms 00:19:32.892 [2024-11-27 21:51:55.921870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.892 [2024-11-27 21:51:55.923390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.892 [2024-11-27 21:51:55.923421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:32.892 [2024-11-27 21:51:55.923428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.492 ms 00:19:32.892 [2024-11-27 21:51:55.923435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.892 [2024-11-27 21:51:55.924960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.892 [2024-11-27 21:51:55.924992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:32.892 [2024-11-27 21:51:55.924998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.469 ms 00:19:32.892 [2024-11-27 21:51:55.925005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.892 [2024-11-27 21:51:55.925032] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:32.892 [2024-11-27 21:51:55.925044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:32.892 [2024-11-27 21:51:55.925051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925112] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 
21:51:55.925279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:19:32.893 [2024-11-27 21:51:55.925462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:32.893 [2024-11-27 21:51:55.925528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:32.894 [2024-11-27 21:51:55.925534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:32.894 [2024-11-27 21:51:55.925541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:32.894 [2024-11-27 21:51:55.925546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:32.894 [2024-11-27 21:51:55.925553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:32.894 [2024-11-27 21:51:55.925559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:32.894 [2024-11-27 21:51:55.925566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:32.894 [2024-11-27 21:51:55.925571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:32.894 [2024-11-27 21:51:55.925579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:32.894 [2024-11-27 21:51:55.925584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:32.894 [2024-11-27 21:51:55.925591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:32.894 [2024-11-27 21:51:55.925596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:32.894 [2024-11-27 21:51:55.925605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:32.894 [2024-11-27 21:51:55.925611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:32.894 [2024-11-27 21:51:55.925618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:19:32.894 [2024-11-27 21:51:55.925623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:32.894 [2024-11-27 21:51:55.925630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:32.894 [2024-11-27 21:51:55.925636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:32.894 [2024-11-27 21:51:55.925644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:32.894 [2024-11-27 21:51:55.925658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:32.894 [2024-11-27 21:51:55.925665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:32.894 [2024-11-27 21:51:55.925671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:32.894 [2024-11-27 21:51:55.925677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:32.894 [2024-11-27 21:51:55.925683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:32.894 [2024-11-27 21:51:55.925690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:32.894 [2024-11-27 21:51:55.925695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:32.894 [2024-11-27 21:51:55.925703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:32.894 [2024-11-27 21:51:55.925709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:32.894 [2024-11-27 21:51:55.925718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:32.894 [2024-11-27 21:51:55.925723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:32.894 [2024-11-27 21:51:55.925736] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:32.894 [2024-11-27 21:51:55.925742] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 78ee2df8-80e5-4b2c-bba3-34ec78c869a1 00:19:32.894 [2024-11-27 21:51:55.925751] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:32.894 [2024-11-27 21:51:55.925756] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:32.894 [2024-11-27 21:51:55.925763] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:32.894 [2024-11-27 21:51:55.925769] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:32.894 [2024-11-27 21:51:55.925775] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:32.894 [2024-11-27 21:51:55.925786] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:32.894 [2024-11-27 21:51:55.925793] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:32.894 [2024-11-27 21:51:55.925797] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:32.894 [2024-11-27 21:51:55.925804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:32.894 [2024-11-27 21:51:55.925810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.894 
[2024-11-27 21:51:55.925816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:32.894 [2024-11-27 21:51:55.925823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.779 ms 00:19:32.894 [2024-11-27 21:51:55.925833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.894 [2024-11-27 21:51:55.927079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.894 [2024-11-27 21:51:55.927104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:32.894 [2024-11-27 21:51:55.927111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.229 ms 00:19:32.894 [2024-11-27 21:51:55.927118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.894 [2024-11-27 21:51:55.927186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.894 [2024-11-27 21:51:55.927195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:32.894 [2024-11-27 21:51:55.927201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:32.894 [2024-11-27 21:51:55.927208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.894 [2024-11-27 21:51:55.931738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.894 [2024-11-27 21:51:55.931768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:32.894 [2024-11-27 21:51:55.931776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.894 [2024-11-27 21:51:55.931783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.894 [2024-11-27 21:51:55.931845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.894 [2024-11-27 21:51:55.931854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:32.894 [2024-11-27 21:51:55.931860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.894 [2024-11-27 21:51:55.931868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.894 [2024-11-27 21:51:55.931899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.894 [2024-11-27 21:51:55.931907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:32.894 [2024-11-27 21:51:55.931913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.894 [2024-11-27 21:51:55.931920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.894 [2024-11-27 21:51:55.931934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.894 [2024-11-27 21:51:55.931944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:32.894 [2024-11-27 21:51:55.931952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.894 [2024-11-27 21:51:55.931959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.894 [2024-11-27 21:51:55.940058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.894 [2024-11-27 21:51:55.940098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:32.894 [2024-11-27 21:51:55.940106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.894 [2024-11-27 21:51:55.940117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.894 [2024-11-27 21:51:55.946286] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.894 [2024-11-27 21:51:55.946321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:32.894 [2024-11-27 21:51:55.946329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.894 [2024-11-27 21:51:55.946349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.894 [2024-11-27 21:51:55.946395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.894 [2024-11-27 21:51:55.946404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:32.894 [2024-11-27 21:51:55.946410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.895 [2024-11-27 21:51:55.946417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.895 [2024-11-27 21:51:55.946444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.895 [2024-11-27 21:51:55.946452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:32.895 [2024-11-27 21:51:55.946458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.895 [2024-11-27 21:51:55.946465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.895 [2024-11-27 21:51:55.946516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.895 [2024-11-27 21:51:55.946526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:32.895 [2024-11-27 21:51:55.946532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.895 [2024-11-27 21:51:55.946539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.895 [2024-11-27 21:51:55.946566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.895 [2024-11-27 21:51:55.946583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:32.895 [2024-11-27 21:51:55.946589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.895 [2024-11-27 21:51:55.946598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.895 [2024-11-27 21:51:55.946628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.895 [2024-11-27 21:51:55.946649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:32.895 [2024-11-27 21:51:55.946655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.895 [2024-11-27 21:51:55.946662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.895 [2024-11-27 21:51:55.946697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:32.895 [2024-11-27 21:51:55.946709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:32.895 [2024-11-27 21:51:55.946716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:32.895 [2024-11-27 21:51:55.946722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.895 [2024-11-27 21:51:55.946828] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 45.161 ms, result 0 00:19:33.155 21:51:56 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:33.155 21:51:56 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:33.155 [2024-11-27 21:51:56.163757] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:19:33.155 [2024-11-27 21:51:56.163864] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87456 ] 00:19:33.414 [2024-11-27 21:51:56.304206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.414 [2024-11-27 21:51:56.327323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.414 [2024-11-27 21:51:56.415162] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:33.414 [2024-11-27 21:51:56.415215] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:33.674 [2024-11-27 21:51:56.557554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.674 [2024-11-27 21:51:56.557590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:33.674 [2024-11-27 21:51:56.557599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:33.674 [2024-11-27 21:51:56.557605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.674 [2024-11-27 21:51:56.559355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.674 [2024-11-27 21:51:56.559387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:33.674 [2024-11-27 21:51:56.559394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.734 ms 00:19:33.674 [2024-11-27 21:51:56.559400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.674 [2024-11-27 21:51:56.559462] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:33.674 [2024-11-27 21:51:56.559629] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:33.674 [2024-11-27 21:51:56.559651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.674 [2024-11-27 21:51:56.559657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:33.674 [2024-11-27 21:51:56.559664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:19:33.674 [2024-11-27 21:51:56.559670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.674 [2024-11-27 21:51:56.560679] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:33.674 [2024-11-27 21:51:56.562578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.674 [2024-11-27 21:51:56.562608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:33.674 [2024-11-27 21:51:56.562616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.900 ms 00:19:33.674 [2024-11-27 21:51:56.562624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.674 [2024-11-27 21:51:56.562671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.674 [2024-11-27 21:51:56.562679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:33.674 [2024-11-27 21:51:56.562685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.014 ms 00:19:33.674 [2024-11-27 21:51:56.562690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.674 [2024-11-27 21:51:56.567020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.674 [2024-11-27 21:51:56.567053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:33.674 [2024-11-27 21:51:56.567060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.297 ms 00:19:33.674 [2024-11-27 21:51:56.567066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.674 [2024-11-27 21:51:56.567157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.674 [2024-11-27 21:51:56.567166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:33.674 [2024-11-27 21:51:56.567172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:33.674 [2024-11-27 21:51:56.567181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.674 [2024-11-27 21:51:56.567200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.674 [2024-11-27 21:51:56.567209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:33.674 [2024-11-27 21:51:56.567215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:33.674 [2024-11-27 21:51:56.567220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.674 [2024-11-27 21:51:56.567235] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:33.674 [2024-11-27 21:51:56.568377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.674 [2024-11-27 21:51:56.568401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:33.674 [2024-11-27 21:51:56.568407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.145 ms 00:19:33.674 [2024-11-27 21:51:56.568416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.674 [2024-11-27 21:51:56.568442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.674 [2024-11-27 21:51:56.568449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:33.674 [2024-11-27 21:51:56.568460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:33.674 [2024-11-27 21:51:56.568465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.674 [2024-11-27 21:51:56.568480] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:33.674 [2024-11-27 21:51:56.568493] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:33.674 [2024-11-27 21:51:56.568524] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:33.674 [2024-11-27 21:51:56.568540] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:33.674 [2024-11-27 21:51:56.568618] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:33.674 [2024-11-27 21:51:56.568626] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:33.674 [2024-11-27 21:51:56.568634] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:33.674 [2024-11-27 21:51:56.568642] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:33.674 [2024-11-27 21:51:56.568648] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:33.674 [2024-11-27 21:51:56.568655] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:33.674 [2024-11-27 21:51:56.568660] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:33.674 [2024-11-27 21:51:56.568666] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:33.674 [2024-11-27 21:51:56.568671] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:33.674 [2024-11-27 21:51:56.568678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.674 [2024-11-27 21:51:56.568686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:33.674 [2024-11-27 21:51:56.568692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:19:33.674 [2024-11-27 21:51:56.568701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.674 [2024-11-27 21:51:56.568768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.674 [2024-11-27 21:51:56.568780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:33.674 [2024-11-27 21:51:56.568786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:33.675 [2024-11-27 21:51:56.568792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.675 [2024-11-27 21:51:56.568872] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:33.675 [2024-11-27 21:51:56.568883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:33.675 [2024-11-27 21:51:56.568889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:33.675 [2024-11-27 21:51:56.568898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.675 [2024-11-27 21:51:56.568903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:33.675 [2024-11-27 21:51:56.568909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:33.675 [2024-11-27 21:51:56.568915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:33.675 [2024-11-27 21:51:56.568920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:33.675 [2024-11-27 21:51:56.568927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:33.675 [2024-11-27 21:51:56.568932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:33.675 [2024-11-27 21:51:56.568937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:33.675 [2024-11-27 21:51:56.568942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:33.675 [2024-11-27 21:51:56.568947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:33.675 [2024-11-27 21:51:56.568954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:33.675 [2024-11-27 21:51:56.568959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:33.675 [2024-11-27 21:51:56.568964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.675 [2024-11-27 21:51:56.568969] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:33.675 [2024-11-27 21:51:56.568974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:33.675 [2024-11-27 21:51:56.568979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.675 [2024-11-27 21:51:56.568983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:33.675 [2024-11-27 21:51:56.568988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:33.675 [2024-11-27 21:51:56.568993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:33.675 [2024-11-27 21:51:56.568998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:33.675 [2024-11-27 21:51:56.569002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:33.675 [2024-11-27 21:51:56.569010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:33.675 [2024-11-27 21:51:56.569015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:33.675 [2024-11-27 21:51:56.569019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:33.675 [2024-11-27 21:51:56.569024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:33.675 [2024-11-27 21:51:56.569030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:33.675 [2024-11-27 21:51:56.569036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:33.675 [2024-11-27 21:51:56.569042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:33.675 [2024-11-27 21:51:56.569047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:33.675 [2024-11-27 21:51:56.569053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:33.675 [2024-11-27 21:51:56.569059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:33.675 [2024-11-27 21:51:56.569064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:33.675 [2024-11-27 21:51:56.569070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:33.675 [2024-11-27 21:51:56.569075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:33.675 [2024-11-27 21:51:56.569082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:33.675 [2024-11-27 21:51:56.569088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:33.675 [2024-11-27 21:51:56.569093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.675 [2024-11-27 21:51:56.569100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:33.675 [2024-11-27 21:51:56.569105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:33.675 [2024-11-27 21:51:56.569111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.675 [2024-11-27 21:51:56.569117] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:33.675 [2024-11-27 21:51:56.569123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:33.675 [2024-11-27 21:51:56.569130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:33.675 [2024-11-27 21:51:56.569136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.675 [2024-11-27 21:51:56.569143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:33.675 
[2024-11-27 21:51:56.569149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:33.675 [2024-11-27 21:51:56.569155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:33.675 [2024-11-27 21:51:56.569161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:33.675 [2024-11-27 21:51:56.569167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:33.675 [2024-11-27 21:51:56.569172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:33.675 [2024-11-27 21:51:56.569179] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:33.675 [2024-11-27 21:51:56.569187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:33.675 [2024-11-27 21:51:56.569194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:33.675 [2024-11-27 21:51:56.569201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:33.675 [2024-11-27 21:51:56.569208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:33.675 [2024-11-27 21:51:56.569214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:33.675 [2024-11-27 21:51:56.569220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:33.675 [2024-11-27 21:51:56.569226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:33.675 [2024-11-27 21:51:56.569232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:33.675 [2024-11-27 21:51:56.569243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:33.675 [2024-11-27 21:51:56.569249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:33.675 [2024-11-27 21:51:56.569256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:33.675 [2024-11-27 21:51:56.569262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:33.675 [2024-11-27 21:51:56.569268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:33.675 [2024-11-27 21:51:56.569274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:33.675 [2024-11-27 21:51:56.569281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:33.675 [2024-11-27 21:51:56.569287] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:33.675 [2024-11-27 21:51:56.569296] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:33.675 [2024-11-27 21:51:56.569303] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:33.675 [2024-11-27 21:51:56.569311] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:33.675 [2024-11-27 21:51:56.569317] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:33.676 [2024-11-27 21:51:56.569324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:33.676 [2024-11-27 21:51:56.569330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.676 [2024-11-27 21:51:56.569347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:33.676 [2024-11-27 21:51:56.569357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.510 ms 00:19:33.676 [2024-11-27 21:51:56.569363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.676 [2024-11-27 21:51:56.577117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.676 [2024-11-27 21:51:56.577149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:33.676 [2024-11-27 21:51:56.577158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.716 ms 00:19:33.676 [2024-11-27 21:51:56.577166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.676 [2024-11-27 21:51:56.577255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.676 [2024-11-27 21:51:56.577266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:33.676 [2024-11-27 21:51:56.577272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:33.676 [2024-11-27 21:51:56.577277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.676 [2024-11-27 21:51:56.599509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.676 [2024-11-27 21:51:56.599568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:33.676 [2024-11-27 21:51:56.599597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.212 ms 00:19:33.676 [2024-11-27 21:51:56.599611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.676 [2024-11-27 21:51:56.599734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.676 [2024-11-27 21:51:56.599754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:33.676 [2024-11-27 21:51:56.599770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:33.676 [2024-11-27 21:51:56.599783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.676 [2024-11-27 21:51:56.600175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.676 [2024-11-27 21:51:56.600227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:33.676 [2024-11-27 21:51:56.600244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:19:33.676 [2024-11-27 21:51:56.600265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.676 [2024-11-27 
21:51:56.600498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.676 [2024-11-27 21:51:56.600538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:33.676 [2024-11-27 21:51:56.600553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:19:33.676 [2024-11-27 21:51:56.600565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.676 [2024-11-27 21:51:56.606200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.676 [2024-11-27 21:51:56.606231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:33.676 [2024-11-27 21:51:56.606241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.602 ms 00:19:33.676 [2024-11-27 21:51:56.606248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.676 [2024-11-27 21:51:56.608800] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:33.676 [2024-11-27 21:51:56.608834] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:33.676 [2024-11-27 21:51:56.608844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.676 [2024-11-27 21:51:56.608852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:33.676 [2024-11-27 21:51:56.608860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.489 ms 00:19:33.676 [2024-11-27 21:51:56.608867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.676 [2024-11-27 21:51:56.621011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.676 [2024-11-27 21:51:56.621041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:33.676 [2024-11-27 21:51:56.621054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.101 ms 00:19:33.676 [2024-11-27 21:51:56.621060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.676 [2024-11-27 21:51:56.622755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.676 [2024-11-27 21:51:56.622782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:33.676 [2024-11-27 21:51:56.622789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.642 ms 00:19:33.676 [2024-11-27 21:51:56.622795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.676 [2024-11-27 21:51:56.624303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.676 [2024-11-27 21:51:56.624330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:33.676 [2024-11-27 21:51:56.624351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.478 ms 00:19:33.676 [2024-11-27 21:51:56.624357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.676 [2024-11-27 21:51:56.624599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.676 [2024-11-27 21:51:56.624618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:33.676 [2024-11-27 21:51:56.624628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:19:33.676 [2024-11-27 21:51:56.624633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.676 [2024-11-27 21:51:56.639026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:33.676 [2024-11-27 21:51:56.639062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:33.676 [2024-11-27 21:51:56.639071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.376 ms 00:19:33.676 [2024-11-27 21:51:56.639082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.676 [2024-11-27 21:51:56.644935] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:33.676 [2024-11-27 21:51:56.656716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.676 [2024-11-27 21:51:56.656751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:33.676 [2024-11-27 21:51:56.656760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.578 ms 00:19:33.676 [2024-11-27 21:51:56.656766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.676 [2024-11-27 21:51:56.656838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.676 [2024-11-27 21:51:56.656851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:33.676 [2024-11-27 21:51:56.656857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:33.676 [2024-11-27 21:51:56.656863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.676 [2024-11-27 21:51:56.656895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.676 [2024-11-27 21:51:56.656907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:33.676 [2024-11-27 21:51:56.656914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:33.676 [2024-11-27 21:51:56.656919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.676 [2024-11-27 21:51:56.656937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.676 [2024-11-27 21:51:56.656945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:33.676 [2024-11-27 21:51:56.656953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:33.676 [2024-11-27 21:51:56.656958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.676 [2024-11-27 21:51:56.656980] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:33.676 [2024-11-27 21:51:56.656986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.676 [2024-11-27 21:51:56.656993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:33.676 [2024-11-27 21:51:56.656998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:33.676 [2024-11-27 21:51:56.657004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.676 [2024-11-27 21:51:56.660157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.676 [2024-11-27 21:51:56.660188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:33.676 [2024-11-27 21:51:56.660195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.140 ms 00:19:33.677 [2024-11-27 21:51:56.660206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.677 [2024-11-27 21:51:56.660264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.677 [2024-11-27 21:51:56.660272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:19:33.677 [2024-11-27 21:51:56.660281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:33.677 [2024-11-27 21:51:56.660287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.677 [2024-11-27 21:51:56.660929] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:33.677 [2024-11-27 21:51:56.661775] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 103.167 ms, result 0 00:19:33.677 [2024-11-27 21:51:56.662434] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:33.677 [2024-11-27 21:51:56.670228] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:34.618  [2024-11-27T21:51:58.684Z] Copying: 22/256 [MB] (22 MBps) [2024-11-27T21:52:00.069Z] Copying: 38/256 [MB] (16 MBps) [2024-11-27T21:52:01.011Z] Copying: 56/256 [MB] (17 MBps) [2024-11-27T21:52:01.953Z] Copying: 77/256 [MB] (20 MBps) [2024-11-27T21:52:02.910Z] Copying: 92/256 [MB] (14 MBps) [2024-11-27T21:52:03.857Z] Copying: 109/256 [MB] (16 MBps) [2024-11-27T21:52:04.802Z] Copying: 133/256 [MB] (23 MBps) [2024-11-27T21:52:05.746Z] Copying: 151/256 [MB] (18 MBps) [2024-11-27T21:52:06.691Z] Copying: 171/256 [MB] (20 MBps) [2024-11-27T21:52:08.078Z] Copying: 191/256 [MB] (19 MBps) [2024-11-27T21:52:08.709Z] Copying: 207/256 [MB] (16 MBps) [2024-11-27T21:52:10.114Z] Copying: 227/256 [MB] (19 MBps) [2024-11-27T21:52:10.376Z] Copying: 248/256 [MB] (21 MBps) [2024-11-27T21:52:10.376Z] Copying: 256/256 [MB] (average 18 MBps)[2024-11-27 21:52:10.146553] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:47.255 [2024-11-27 21:52:10.148433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.255 [2024-11-27 21:52:10.148483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:47.255 [2024-11-27 21:52:10.148498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:47.255 [2024-11-27 21:52:10.148507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.255 [2024-11-27 21:52:10.148529] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:47.255 [2024-11-27 21:52:10.149208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.255 [2024-11-27 21:52:10.149245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:47.255 [2024-11-27 21:52:10.149256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.666 ms 00:19:47.255 [2024-11-27 21:52:10.149265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.255 [2024-11-27 21:52:10.149546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.255 [2024-11-27 21:52:10.149585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:47.255 [2024-11-27 21:52:10.149595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:19:47.255 [2024-11-27 21:52:10.149604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.255 [2024-11-27 21:52:10.153305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.255 [2024-11-27 21:52:10.153329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:19:47.255 [2024-11-27 21:52:10.153356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.663 ms 00:19:47.255 [2024-11-27 21:52:10.153364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.255 [2024-11-27 21:52:10.160593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.255 [2024-11-27 21:52:10.160637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:47.255 [2024-11-27 21:52:10.160655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.193 ms 00:19:47.255 [2024-11-27 21:52:10.160663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.255 [2024-11-27 21:52:10.163686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.255 [2024-11-27 21:52:10.163735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:47.255 [2024-11-27 21:52:10.163745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.949 ms 00:19:47.255 [2024-11-27 21:52:10.163752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.255 [2024-11-27 21:52:10.168682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.255 [2024-11-27 21:52:10.168731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:47.255 [2024-11-27 21:52:10.168741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.870 ms 00:19:47.255 [2024-11-27 21:52:10.168749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.255 [2024-11-27 21:52:10.168881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.255 [2024-11-27 21:52:10.168892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:47.255 [2024-11-27 21:52:10.168914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:19:47.255 [2024-11-27 21:52:10.168922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.255 [2024-11-27 21:52:10.172064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.255 [2024-11-27 21:52:10.172111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:47.255 [2024-11-27 21:52:10.172121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.124 ms 00:19:47.255 [2024-11-27 21:52:10.172129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.255 [2024-11-27 21:52:10.175065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.255 [2024-11-27 21:52:10.175110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:47.255 [2024-11-27 21:52:10.175119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.894 ms 00:19:47.255 [2024-11-27 21:52:10.175127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.255 [2024-11-27 21:52:10.177512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.255 [2024-11-27 21:52:10.177557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:47.255 [2024-11-27 21:52:10.177566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.344 ms 00:19:47.255 [2024-11-27 21:52:10.177573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.255 [2024-11-27 21:52:10.179803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.255 [2024-11-27 21:52:10.179849] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:47.255 [2024-11-27 21:52:10.179858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.144 ms 00:19:47.255 [2024-11-27 21:52:10.179865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.255 [2024-11-27 21:52:10.179904] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:47.255 [2024-11-27 21:52:10.179919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:47.255 [2024-11-27 21:52:10.179938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:47.255 [2024-11-27 21:52:10.179946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:47.255 [2024-11-27 21:52:10.179954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:47.255 [2024-11-27 21:52:10.179962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:47.255 [2024-11-27 21:52:10.179971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:47.255 [2024-11-27 21:52:10.179978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:47.255 [2024-11-27 21:52:10.179986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:47.255 [2024-11-27 21:52:10.179994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:47.255 [2024-11-27 21:52:10.180001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:47.255 [2024-11-27 21:52:10.180008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:47.255 [2024-11-27 21:52:10.180016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:47.255 [2024-11-27 21:52:10.180023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:47.255 [2024-11-27 21:52:10.180031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:19:47.256 [2024-11-27 21:52:10.180095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180689] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:47.256 [2024-11-27 21:52:10.180727] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:47.256 [2024-11-27 21:52:10.180735] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 78ee2df8-80e5-4b2c-bba3-34ec78c869a1 00:19:47.256 [2024-11-27 21:52:10.180743] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:47.257 [2024-11-27 21:52:10.180750] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:47.257 [2024-11-27 21:52:10.180757] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:47.257 [2024-11-27 21:52:10.180765] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:47.257 [2024-11-27 21:52:10.180777] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:47.257 [2024-11-27 21:52:10.180785] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:47.257 [2024-11-27 21:52:10.180792] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:47.257 [2024-11-27 21:52:10.180799] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:47.257 [2024-11-27 21:52:10.180805] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:47.257 [2024-11-27 21:52:10.180812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.257 [2024-11-27 21:52:10.180819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:47.257 [2024-11-27 21:52:10.180829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.909 ms 00:19:47.257 [2024-11-27 21:52:10.180836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.257 [2024-11-27 21:52:10.183137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.257 [2024-11-27 21:52:10.183173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:47.257 [2024-11-27 21:52:10.183186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.282 ms 00:19:47.257 [2024-11-27 21:52:10.183194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.257 [2024-11-27 21:52:10.183312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.257 [2024-11-27 21:52:10.183321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:47.257 [2024-11-27 21:52:10.183330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:19:47.257 [2024-11-27 21:52:10.183357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.257 [2024-11-27 21:52:10.191016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.257 [2024-11-27 21:52:10.191069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:47.257 [2024-11-27 21:52:10.191085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.257 [2024-11-27 21:52:10.191094] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:47.257 [2024-11-27 21:52:10.191176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.257 [2024-11-27 21:52:10.191185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:47.257 [2024-11-27 21:52:10.191198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.257 [2024-11-27 21:52:10.191206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.257 [2024-11-27 21:52:10.191251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.257 [2024-11-27 21:52:10.191260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:47.257 [2024-11-27 21:52:10.191268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.257 [2024-11-27 21:52:10.191278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.257 [2024-11-27 21:52:10.191295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.257 [2024-11-27 21:52:10.191303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:47.257 [2024-11-27 21:52:10.191315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.257 [2024-11-27 21:52:10.191322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.257 [2024-11-27 21:52:10.204730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.257 [2024-11-27 21:52:10.204784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:47.257 [2024-11-27 21:52:10.204801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.257 [2024-11-27 21:52:10.204809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.257 [2024-11-27 21:52:10.214950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.257 [2024-11-27 21:52:10.215002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:47.257 [2024-11-27 21:52:10.215014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.257 [2024-11-27 21:52:10.215022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.257 [2024-11-27 21:52:10.215070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.257 [2024-11-27 21:52:10.215079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:47.257 [2024-11-27 21:52:10.215088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.257 [2024-11-27 21:52:10.215096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.257 [2024-11-27 21:52:10.215134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.257 [2024-11-27 21:52:10.215142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:47.257 [2024-11-27 21:52:10.215151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.257 [2024-11-27 21:52:10.215158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.257 [2024-11-27 21:52:10.215226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.257 [2024-11-27 21:52:10.215236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:47.257 [2024-11-27 21:52:10.215251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:47.257 [2024-11-27 21:52:10.215259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.257 [2024-11-27 21:52:10.215291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.257 [2024-11-27 21:52:10.215304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:47.257 [2024-11-27 21:52:10.215312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.257 [2024-11-27 21:52:10.215319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.257 [2024-11-27 21:52:10.215375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.257 [2024-11-27 21:52:10.215385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:47.257 [2024-11-27 21:52:10.215394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.257 [2024-11-27 21:52:10.215402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.257 [2024-11-27 21:52:10.215449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.257 [2024-11-27 21:52:10.215459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:47.257 [2024-11-27 21:52:10.215468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.257 [2024-11-27 21:52:10.215476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.257 [2024-11-27 21:52:10.215626] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 67.163 ms, result 0 00:19:47.519 00:19:47.519 00:19:47.519 21:52:10 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:19:47.519 21:52:10 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:48.091 21:52:10 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:48.091 [2024-11-27 21:52:11.039173] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
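The three trim.sh steps echoed above (@86, @87, @90) appear to form the test's verify-then-rewrite cycle: compare the read-back data file against zeroes, checksum it, then push a fresh random pattern through the ftl0 bdev with spdk_dd. A minimal stand-alone sketch of that cycle, assuming the same workspace paths as this run and the 4 KiB block size implied by the 4096 kB copy reported below, would be:

    #!/usr/bin/env bash
    set -euo pipefail

    # Paths taken from this run's log; adjust for a different workspace.
    DATA=/home/vagrant/spdk_repo/spdk/test/ftl/data
    PATTERN=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
    FTL_JSON=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

    # 1. The first 4 MiB (4194304 bytes) of the read-back file must be all zeroes
    #    (presumably the region trimmed earlier in the test).
    cmp --bytes=4194304 "$DATA" /dev/zero

    # 2. Checksum the read-back file so a later pass can be compared against it.
    md5sum "$DATA"

    # 3. Write 1024 blocks of the random pattern through the ftl0 bdev;
    #    at 4 KiB per block this is the 4096/4096 kB copy shown below.
    "$SPDK_DD" --if="$PATTERN" --ob=ftl0 --count=1024 --json="$FTL_JSON"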
00:19:48.091 [2024-11-27 21:52:11.039329] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87616 ] 00:19:48.091 [2024-11-27 21:52:11.182730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.091 [2024-11-27 21:52:11.202175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.353 [2024-11-27 21:52:11.292853] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:48.353 [2024-11-27 21:52:11.292921] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:48.353 [2024-11-27 21:52:11.449421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.353 [2024-11-27 21:52:11.449466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:48.353 [2024-11-27 21:52:11.449484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:48.353 [2024-11-27 21:52:11.449492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.353 [2024-11-27 21:52:11.451819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.353 [2024-11-27 21:52:11.451859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:48.353 [2024-11-27 21:52:11.451869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.304 ms 00:19:48.353 [2024-11-27 21:52:11.451879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.353 [2024-11-27 21:52:11.451958] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:48.353 [2024-11-27 21:52:11.452479] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:48.353 [2024-11-27 21:52:11.452521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.353 [2024-11-27 21:52:11.452531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:48.353 [2024-11-27 21:52:11.452541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:19:48.353 [2024-11-27 21:52:11.452549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.353 [2024-11-27 21:52:11.453726] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:48.353 [2024-11-27 21:52:11.456280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.353 [2024-11-27 21:52:11.456315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:48.353 [2024-11-27 21:52:11.456328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.556 ms 00:19:48.353 [2024-11-27 21:52:11.456353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.353 [2024-11-27 21:52:11.456408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.354 [2024-11-27 21:52:11.456418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:48.354 [2024-11-27 21:52:11.456430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:48.354 [2024-11-27 21:52:11.456437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.354 [2024-11-27 21:52:11.461368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:48.354 [2024-11-27 21:52:11.461397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:48.354 [2024-11-27 21:52:11.461410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.893 ms 00:19:48.354 [2024-11-27 21:52:11.461417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.354 [2024-11-27 21:52:11.461519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.354 [2024-11-27 21:52:11.461533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:48.354 [2024-11-27 21:52:11.461541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:19:48.354 [2024-11-27 21:52:11.461550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.354 [2024-11-27 21:52:11.461579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.354 [2024-11-27 21:52:11.461591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:48.354 [2024-11-27 21:52:11.461599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:48.354 [2024-11-27 21:52:11.461605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.354 [2024-11-27 21:52:11.461638] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:48.354 [2024-11-27 21:52:11.462994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.354 [2024-11-27 21:52:11.463022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:48.354 [2024-11-27 21:52:11.463031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.362 ms 00:19:48.354 [2024-11-27 21:52:11.463043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.354 [2024-11-27 21:52:11.463083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.354 [2024-11-27 21:52:11.463094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:48.354 [2024-11-27 21:52:11.463101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:48.354 [2024-11-27 21:52:11.463108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.354 [2024-11-27 21:52:11.463125] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:48.354 [2024-11-27 21:52:11.463141] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:48.354 [2024-11-27 21:52:11.463179] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:48.354 [2024-11-27 21:52:11.463196] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:48.354 [2024-11-27 21:52:11.463297] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:48.354 [2024-11-27 21:52:11.463307] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:48.354 [2024-11-27 21:52:11.463317] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:48.354 [2024-11-27 21:52:11.463327] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:48.354 [2024-11-27 21:52:11.463355] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:48.354 [2024-11-27 21:52:11.463363] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:48.354 [2024-11-27 21:52:11.463371] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:48.354 [2024-11-27 21:52:11.463377] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:48.354 [2024-11-27 21:52:11.463384] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:48.354 [2024-11-27 21:52:11.463399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.354 [2024-11-27 21:52:11.463406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:48.354 [2024-11-27 21:52:11.463414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:19:48.354 [2024-11-27 21:52:11.463420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.354 [2024-11-27 21:52:11.463506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.354 [2024-11-27 21:52:11.463519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:48.354 [2024-11-27 21:52:11.463526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:48.354 [2024-11-27 21:52:11.463534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.354 [2024-11-27 21:52:11.463634] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:48.354 [2024-11-27 21:52:11.463649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:48.354 [2024-11-27 21:52:11.463657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:48.354 [2024-11-27 21:52:11.463669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.354 [2024-11-27 21:52:11.463677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:48.354 [2024-11-27 21:52:11.463685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:48.354 [2024-11-27 21:52:11.463692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:48.354 [2024-11-27 21:52:11.463704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:48.354 [2024-11-27 21:52:11.463712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:48.354 [2024-11-27 21:52:11.463719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:48.354 [2024-11-27 21:52:11.463727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:48.354 [2024-11-27 21:52:11.463734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:48.354 [2024-11-27 21:52:11.463743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:48.354 [2024-11-27 21:52:11.463750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:48.354 [2024-11-27 21:52:11.463758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:48.354 [2024-11-27 21:52:11.463765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.354 [2024-11-27 21:52:11.463772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:48.354 [2024-11-27 21:52:11.463780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:48.354 [2024-11-27 21:52:11.463788] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.354 [2024-11-27 21:52:11.463796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:48.354 [2024-11-27 21:52:11.463803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:48.354 [2024-11-27 21:52:11.463810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:48.354 [2024-11-27 21:52:11.463818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:48.354 [2024-11-27 21:52:11.463829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:48.354 [2024-11-27 21:52:11.463836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:48.354 [2024-11-27 21:52:11.463844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:48.354 [2024-11-27 21:52:11.463851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:48.354 [2024-11-27 21:52:11.463858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:48.354 [2024-11-27 21:52:11.463865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:48.354 [2024-11-27 21:52:11.463873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:48.354 [2024-11-27 21:52:11.463880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:48.354 [2024-11-27 21:52:11.463887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:48.354 [2024-11-27 21:52:11.463894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:48.354 [2024-11-27 21:52:11.463901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:48.354 [2024-11-27 21:52:11.463908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:48.354 [2024-11-27 21:52:11.463915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:48.354 [2024-11-27 21:52:11.463923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:48.354 [2024-11-27 21:52:11.463930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:48.354 [2024-11-27 21:52:11.463937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:48.354 [2024-11-27 21:52:11.463946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.354 [2024-11-27 21:52:11.463954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:48.354 [2024-11-27 21:52:11.463962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:48.354 [2024-11-27 21:52:11.463969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.354 [2024-11-27 21:52:11.463976] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:48.354 [2024-11-27 21:52:11.463984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:48.354 [2024-11-27 21:52:11.463992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:48.354 [2024-11-27 21:52:11.464000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:48.354 [2024-11-27 21:52:11.464008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:48.354 [2024-11-27 21:52:11.464015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:48.354 [2024-11-27 21:52:11.464023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:48.354 
[2024-11-27 21:52:11.464031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:48.354 [2024-11-27 21:52:11.464038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:48.354 [2024-11-27 21:52:11.464047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:48.354 [2024-11-27 21:52:11.464056] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:48.354 [2024-11-27 21:52:11.464065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:48.354 [2024-11-27 21:52:11.464076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:48.354 [2024-11-27 21:52:11.464084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:48.355 [2024-11-27 21:52:11.464092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:48.355 [2024-11-27 21:52:11.464100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:48.355 [2024-11-27 21:52:11.464108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:48.355 [2024-11-27 21:52:11.464116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:48.355 [2024-11-27 21:52:11.464125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:48.355 [2024-11-27 21:52:11.464137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:48.355 [2024-11-27 21:52:11.464146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:48.355 [2024-11-27 21:52:11.464154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:48.355 [2024-11-27 21:52:11.464162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:48.355 [2024-11-27 21:52:11.464170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:48.355 [2024-11-27 21:52:11.464177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:48.355 [2024-11-27 21:52:11.464185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:48.355 [2024-11-27 21:52:11.464192] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:48.355 [2024-11-27 21:52:11.464202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:48.355 [2024-11-27 21:52:11.464214] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:48.355 [2024-11-27 21:52:11.464222] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:48.355 [2024-11-27 21:52:11.464229] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:48.355 [2024-11-27 21:52:11.464236] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:48.355 [2024-11-27 21:52:11.464243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.355 [2024-11-27 21:52:11.464250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:48.355 [2024-11-27 21:52:11.464258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:19:48.355 [2024-11-27 21:52:11.464264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.617 [2024-11-27 21:52:11.473241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.617 [2024-11-27 21:52:11.473276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:48.617 [2024-11-27 21:52:11.473286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.920 ms 00:19:48.617 [2024-11-27 21:52:11.473293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.617 [2024-11-27 21:52:11.473409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.617 [2024-11-27 21:52:11.473423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:48.617 [2024-11-27 21:52:11.473431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:19:48.617 [2024-11-27 21:52:11.473438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.617 [2024-11-27 21:52:11.491376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.617 [2024-11-27 21:52:11.491417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:48.617 [2024-11-27 21:52:11.491428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.917 ms 00:19:48.617 [2024-11-27 21:52:11.491436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.617 [2024-11-27 21:52:11.491514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.617 [2024-11-27 21:52:11.491526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:48.617 [2024-11-27 21:52:11.491534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:48.617 [2024-11-27 21:52:11.491546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.617 [2024-11-27 21:52:11.491879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.617 [2024-11-27 21:52:11.491908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:48.617 [2024-11-27 21:52:11.491917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:19:48.617 [2024-11-27 21:52:11.491924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.617 [2024-11-27 21:52:11.492054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.617 [2024-11-27 21:52:11.492069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:48.617 [2024-11-27 21:52:11.492078] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:19:48.617 [2024-11-27 21:52:11.492088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.617 [2024-11-27 21:52:11.497910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.617 [2024-11-27 21:52:11.497944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:48.617 [2024-11-27 21:52:11.497954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.799 ms 00:19:48.617 [2024-11-27 21:52:11.497967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.617 [2024-11-27 21:52:11.500893] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:48.617 [2024-11-27 21:52:11.500934] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:48.617 [2024-11-27 21:52:11.500947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.617 [2024-11-27 21:52:11.500955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:48.617 [2024-11-27 21:52:11.500963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.885 ms 00:19:48.617 [2024-11-27 21:52:11.500971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.617 [2024-11-27 21:52:11.516094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.617 [2024-11-27 21:52:11.516129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:48.617 [2024-11-27 21:52:11.516146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.075 ms 00:19:48.617 [2024-11-27 21:52:11.516154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.617 [2024-11-27 21:52:11.518308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.617 [2024-11-27 21:52:11.518352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:48.617 [2024-11-27 21:52:11.518362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.084 ms 00:19:48.617 [2024-11-27 21:52:11.518369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.617 [2024-11-27 21:52:11.520390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.617 [2024-11-27 21:52:11.520427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:48.617 [2024-11-27 21:52:11.520436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.982 ms 00:19:48.617 [2024-11-27 21:52:11.520443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.617 [2024-11-27 21:52:11.520775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.617 [2024-11-27 21:52:11.520794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:48.617 [2024-11-27 21:52:11.520803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:19:48.617 [2024-11-27 21:52:11.520810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.617 [2024-11-27 21:52:11.538413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.617 [2024-11-27 21:52:11.538454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:48.617 [2024-11-27 21:52:11.538464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
17.575 ms 00:19:48.617 [2024-11-27 21:52:11.538473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.617 [2024-11-27 21:52:11.546122] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:48.617 [2024-11-27 21:52:11.561732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.617 [2024-11-27 21:52:11.561776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:48.617 [2024-11-27 21:52:11.561789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.186 ms 00:19:48.617 [2024-11-27 21:52:11.561797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.617 [2024-11-27 21:52:11.561893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.617 [2024-11-27 21:52:11.561904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:48.617 [2024-11-27 21:52:11.561916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:48.617 [2024-11-27 21:52:11.561928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.617 [2024-11-27 21:52:11.561977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.617 [2024-11-27 21:52:11.561986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:48.617 [2024-11-27 21:52:11.561994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:48.617 [2024-11-27 21:52:11.562001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.617 [2024-11-27 21:52:11.562025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.617 [2024-11-27 21:52:11.562034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:48.617 [2024-11-27 21:52:11.562045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:48.617 [2024-11-27 21:52:11.562055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.617 [2024-11-27 21:52:11.562093] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:48.617 [2024-11-27 21:52:11.562103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.617 [2024-11-27 21:52:11.562111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:48.617 [2024-11-27 21:52:11.562118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:48.617 [2024-11-27 21:52:11.562126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.617 [2024-11-27 21:52:11.566685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.617 [2024-11-27 21:52:11.566732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:48.617 [2024-11-27 21:52:11.566742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.541 ms 00:19:48.617 [2024-11-27 21:52:11.566750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.617 [2024-11-27 21:52:11.566832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.617 [2024-11-27 21:52:11.566842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:48.617 [2024-11-27 21:52:11.566855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:48.617 [2024-11-27 21:52:11.566865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.617 
[2024-11-27 21:52:11.567727] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:48.617 [2024-11-27 21:52:11.568841] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 118.018 ms, result 0 00:19:48.617 [2024-11-27 21:52:11.570042] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:48.617 [2024-11-27 21:52:11.578299] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:48.880  [2024-11-27T21:52:12.001Z] Copying: 4096/4096 [kB] (average 18 MBps)[2024-11-27 21:52:11.794072] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:48.880 [2024-11-27 21:52:11.795190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.880 [2024-11-27 21:52:11.795240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:48.880 [2024-11-27 21:52:11.795251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:48.880 [2024-11-27 21:52:11.795265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.880 [2024-11-27 21:52:11.795287] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:48.880 [2024-11-27 21:52:11.795982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.880 [2024-11-27 21:52:11.796021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:48.880 [2024-11-27 21:52:11.796032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:19:48.880 [2024-11-27 21:52:11.796041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.880 [2024-11-27 21:52:11.798088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.880 [2024-11-27 21:52:11.798136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:48.880 [2024-11-27 21:52:11.798154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.019 ms 00:19:48.880 [2024-11-27 21:52:11.798167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.880 [2024-11-27 21:52:11.802522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.880 [2024-11-27 21:52:11.802566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:48.880 [2024-11-27 21:52:11.802576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.338 ms 00:19:48.880 [2024-11-27 21:52:11.802584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.880 [2024-11-27 21:52:11.809485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.880 [2024-11-27 21:52:11.809526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:48.881 [2024-11-27 21:52:11.809536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.867 ms 00:19:48.881 [2024-11-27 21:52:11.809550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.881 [2024-11-27 21:52:11.812261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.881 [2024-11-27 21:52:11.812311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:48.881 [2024-11-27 21:52:11.812321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 2.662 ms 00:19:48.881 [2024-11-27 21:52:11.812328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.881 [2024-11-27 21:52:11.817400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.881 [2024-11-27 21:52:11.817450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:48.881 [2024-11-27 21:52:11.817461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.999 ms 00:19:48.881 [2024-11-27 21:52:11.817468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.881 [2024-11-27 21:52:11.817600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.881 [2024-11-27 21:52:11.817611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:48.881 [2024-11-27 21:52:11.817638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:19:48.881 [2024-11-27 21:52:11.817646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.881 [2024-11-27 21:52:11.820750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.881 [2024-11-27 21:52:11.820798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:48.881 [2024-11-27 21:52:11.820807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.086 ms 00:19:48.881 [2024-11-27 21:52:11.820814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.881 [2024-11-27 21:52:11.823912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.881 [2024-11-27 21:52:11.823961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:48.881 [2024-11-27 21:52:11.823970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.055 ms 00:19:48.881 [2024-11-27 21:52:11.823977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.881 [2024-11-27 21:52:11.825954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.881 [2024-11-27 21:52:11.826001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:48.881 [2024-11-27 21:52:11.826011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.934 ms 00:19:48.881 [2024-11-27 21:52:11.826017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.881 [2024-11-27 21:52:11.828495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.881 [2024-11-27 21:52:11.828542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:48.881 [2024-11-27 21:52:11.828551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.405 ms 00:19:48.881 [2024-11-27 21:52:11.828559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.881 [2024-11-27 21:52:11.828600] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:48.881 [2024-11-27 21:52:11.828624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 
21:52:11.828657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:19:48.881 [2024-11-27 21:52:11.828850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.828992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.829000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.829008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.829015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.829022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.829030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.829038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.829044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.829052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.829059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.829066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.829073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.829080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.829089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:48.881 [2024-11-27 21:52:11.829097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:48.882 [2024-11-27 21:52:11.829413] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:48.882 [2024-11-27 21:52:11.829422] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 78ee2df8-80e5-4b2c-bba3-34ec78c869a1 00:19:48.882 [2024-11-27 21:52:11.829431] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:48.882 [2024-11-27 21:52:11.829446] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:48.882 
[2024-11-27 21:52:11.829453] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:48.882 [2024-11-27 21:52:11.829461] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:48.882 [2024-11-27 21:52:11.829468] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:48.882 [2024-11-27 21:52:11.829479] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:48.882 [2024-11-27 21:52:11.829487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:48.882 [2024-11-27 21:52:11.829493] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:48.882 [2024-11-27 21:52:11.829500] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:48.882 [2024-11-27 21:52:11.829507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.882 [2024-11-27 21:52:11.829514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:48.882 [2024-11-27 21:52:11.829523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.908 ms 00:19:48.882 [2024-11-27 21:52:11.829534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.882 [2024-11-27 21:52:11.831440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.882 [2024-11-27 21:52:11.831476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:48.882 [2024-11-27 21:52:11.831488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.887 ms 00:19:48.882 [2024-11-27 21:52:11.831504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.882 [2024-11-27 21:52:11.831622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.882 [2024-11-27 21:52:11.831633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:48.882 [2024-11-27 21:52:11.831643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:19:48.882 [2024-11-27 21:52:11.831651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.882 [2024-11-27 21:52:11.839466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.882 [2024-11-27 21:52:11.839513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:48.882 [2024-11-27 21:52:11.839530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.882 [2024-11-27 21:52:11.839543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.882 [2024-11-27 21:52:11.839620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.882 [2024-11-27 21:52:11.839629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:48.882 [2024-11-27 21:52:11.839637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.882 [2024-11-27 21:52:11.839645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.882 [2024-11-27 21:52:11.839701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.882 [2024-11-27 21:52:11.839711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:48.882 [2024-11-27 21:52:11.839719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.882 [2024-11-27 21:52:11.839727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.882 [2024-11-27 21:52:11.839748] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:48.882 [2024-11-27 21:52:11.839757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:48.882 [2024-11-27 21:52:11.839764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.882 [2024-11-27 21:52:11.839771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.882 [2024-11-27 21:52:11.853305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.882 [2024-11-27 21:52:11.853375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:48.882 [2024-11-27 21:52:11.853388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.882 [2024-11-27 21:52:11.853403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.882 [2024-11-27 21:52:11.863479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.882 [2024-11-27 21:52:11.863530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:48.882 [2024-11-27 21:52:11.863541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.882 [2024-11-27 21:52:11.863550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.882 [2024-11-27 21:52:11.863630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.882 [2024-11-27 21:52:11.863640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:48.882 [2024-11-27 21:52:11.863650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.882 [2024-11-27 21:52:11.863659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.882 [2024-11-27 21:52:11.863693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.882 [2024-11-27 21:52:11.863703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:48.882 [2024-11-27 21:52:11.863711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.882 [2024-11-27 21:52:11.863720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.882 [2024-11-27 21:52:11.863792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.882 [2024-11-27 21:52:11.863802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:48.882 [2024-11-27 21:52:11.863817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.882 [2024-11-27 21:52:11.863826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.882 [2024-11-27 21:52:11.863861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.882 [2024-11-27 21:52:11.863874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:48.882 [2024-11-27 21:52:11.863882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.882 [2024-11-27 21:52:11.863890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.882 [2024-11-27 21:52:11.863930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.882 [2024-11-27 21:52:11.863940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:48.882 [2024-11-27 21:52:11.863948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.883 [2024-11-27 21:52:11.863956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:48.883 [2024-11-27 21:52:11.864004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.883 [2024-11-27 21:52:11.864015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:48.883 [2024-11-27 21:52:11.864024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.883 [2024-11-27 21:52:11.864032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.883 [2024-11-27 21:52:11.864188] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 68.958 ms, result 0 00:19:49.144 00:19:49.144 00:19:49.144 21:52:12 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=87630 00:19:49.144 21:52:12 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 87630 00:19:49.144 21:52:12 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:49.144 21:52:12 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 87630 ']' 00:19:49.144 21:52:12 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.144 21:52:12 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:49.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.144 21:52:12 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.144 21:52:12 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:49.144 21:52:12 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:49.144 [2024-11-27 21:52:12.146408] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:19:49.144 [2024-11-27 21:52:12.146560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87630 ] 00:19:49.405 [2024-11-27 21:52:12.294152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.405 [2024-11-27 21:52:12.323746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.977 21:52:13 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.977 21:52:13 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:49.977 21:52:13 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:50.238 [2024-11-27 21:52:13.215680] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:50.238 [2024-11-27 21:52:13.215769] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:50.501 [2024-11-27 21:52:13.373720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.501 [2024-11-27 21:52:13.373787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:50.501 [2024-11-27 21:52:13.373803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:50.501 [2024-11-27 21:52:13.373814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.501 [2024-11-27 21:52:13.376671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.501 [2024-11-27 21:52:13.376735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:50.501 [2024-11-27 21:52:13.376747] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.835 ms 00:19:50.501 [2024-11-27 21:52:13.376762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.501 [2024-11-27 21:52:13.376911] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:50.501 [2024-11-27 21:52:13.377193] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:50.501 [2024-11-27 21:52:13.377223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.501 [2024-11-27 21:52:13.377234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:50.501 [2024-11-27 21:52:13.377244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:19:50.501 [2024-11-27 21:52:13.377255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.501 [2024-11-27 21:52:13.379059] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:50.501 [2024-11-27 21:52:13.383065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.501 [2024-11-27 21:52:13.383117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:50.501 [2024-11-27 21:52:13.383131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.002 ms 00:19:50.501 [2024-11-27 21:52:13.383139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.501 [2024-11-27 21:52:13.383222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.501 [2024-11-27 21:52:13.383232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:50.501 [2024-11-27 21:52:13.383246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:50.501 [2024-11-27 21:52:13.383259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.501 [2024-11-27 21:52:13.391717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.501 [2024-11-27 21:52:13.391761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:50.501 [2024-11-27 21:52:13.391776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.398 ms 00:19:50.501 [2024-11-27 21:52:13.391784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.501 [2024-11-27 21:52:13.391935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.501 [2024-11-27 21:52:13.391947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:50.501 [2024-11-27 21:52:13.391962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:19:50.501 [2024-11-27 21:52:13.391970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.501 [2024-11-27 21:52:13.392001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.501 [2024-11-27 21:52:13.392012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:50.501 [2024-11-27 21:52:13.392022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:50.501 [2024-11-27 21:52:13.392030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.501 [2024-11-27 21:52:13.392056] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:50.501 [2024-11-27 21:52:13.394164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:50.501 [2024-11-27 21:52:13.394356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:50.501 [2024-11-27 21:52:13.394377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.115 ms 00:19:50.501 [2024-11-27 21:52:13.394387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.501 [2024-11-27 21:52:13.394431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.502 [2024-11-27 21:52:13.394446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:50.502 [2024-11-27 21:52:13.394454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:50.502 [2024-11-27 21:52:13.394464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.502 [2024-11-27 21:52:13.394485] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:50.502 [2024-11-27 21:52:13.394508] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:50.502 [2024-11-27 21:52:13.394545] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:50.502 [2024-11-27 21:52:13.394565] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:50.502 [2024-11-27 21:52:13.394670] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:50.502 [2024-11-27 21:52:13.394684] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:50.502 [2024-11-27 21:52:13.394695] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:50.502 [2024-11-27 21:52:13.394712] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:50.502 [2024-11-27 21:52:13.394726] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:50.502 [2024-11-27 21:52:13.394738] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:50.502 [2024-11-27 21:52:13.394746] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:50.502 [2024-11-27 21:52:13.394758] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:50.502 [2024-11-27 21:52:13.394769] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:50.502 [2024-11-27 21:52:13.394779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.502 [2024-11-27 21:52:13.394787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:50.502 [2024-11-27 21:52:13.394797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:19:50.502 [2024-11-27 21:52:13.394805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.502 [2024-11-27 21:52:13.394893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.502 [2024-11-27 21:52:13.394903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:50.502 [2024-11-27 21:52:13.394915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:50.502 [2024-11-27 21:52:13.394923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.502 [2024-11-27 21:52:13.395033] 
ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:50.502 [2024-11-27 21:52:13.395044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:50.502 [2024-11-27 21:52:13.395055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:50.502 [2024-11-27 21:52:13.395064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.502 [2024-11-27 21:52:13.395079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:50.502 [2024-11-27 21:52:13.395087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:50.502 [2024-11-27 21:52:13.395096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:50.502 [2024-11-27 21:52:13.395104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:50.502 [2024-11-27 21:52:13.395115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:50.502 [2024-11-27 21:52:13.395123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:50.502 [2024-11-27 21:52:13.395132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:50.502 [2024-11-27 21:52:13.395139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:50.502 [2024-11-27 21:52:13.395150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:50.502 [2024-11-27 21:52:13.395157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:50.502 [2024-11-27 21:52:13.395166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:50.502 [2024-11-27 21:52:13.395174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.502 [2024-11-27 21:52:13.395183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:50.502 [2024-11-27 21:52:13.395192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:50.502 [2024-11-27 21:52:13.395201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.502 [2024-11-27 21:52:13.395212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:50.502 [2024-11-27 21:52:13.395224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:50.502 [2024-11-27 21:52:13.395232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:50.502 [2024-11-27 21:52:13.395242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:50.502 [2024-11-27 21:52:13.395249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:50.502 [2024-11-27 21:52:13.395258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:50.502 [2024-11-27 21:52:13.395265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:50.502 [2024-11-27 21:52:13.395274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:50.502 [2024-11-27 21:52:13.395280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:50.502 [2024-11-27 21:52:13.395289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:50.502 [2024-11-27 21:52:13.395295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:50.502 [2024-11-27 21:52:13.395304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:50.502 [2024-11-27 21:52:13.395311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:50.502 [2024-11-27 
21:52:13.395320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:50.502 [2024-11-27 21:52:13.395327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:50.502 [2024-11-27 21:52:13.395358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:50.502 [2024-11-27 21:52:13.395366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:50.502 [2024-11-27 21:52:13.395376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:50.502 [2024-11-27 21:52:13.395383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:50.502 [2024-11-27 21:52:13.395391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:50.502 [2024-11-27 21:52:13.395398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.502 [2024-11-27 21:52:13.395407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:50.502 [2024-11-27 21:52:13.395413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:50.502 [2024-11-27 21:52:13.395423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.502 [2024-11-27 21:52:13.395430] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:50.502 [2024-11-27 21:52:13.395440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:50.502 [2024-11-27 21:52:13.395449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:50.502 [2024-11-27 21:52:13.395459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.502 [2024-11-27 21:52:13.395467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:50.502 [2024-11-27 21:52:13.395476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:50.502 [2024-11-27 21:52:13.395490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:50.502 [2024-11-27 21:52:13.395499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:50.502 [2024-11-27 21:52:13.395507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:50.502 [2024-11-27 21:52:13.395518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:50.502 [2024-11-27 21:52:13.395526] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:50.502 [2024-11-27 21:52:13.395538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:50.502 [2024-11-27 21:52:13.395549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:50.502 [2024-11-27 21:52:13.395560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:50.502 [2024-11-27 21:52:13.395567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:50.502 [2024-11-27 21:52:13.395577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:50.502 [2024-11-27 21:52:13.395584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:50.502 
[2024-11-27 21:52:13.395594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:50.502 [2024-11-27 21:52:13.395602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:50.502 [2024-11-27 21:52:13.395611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:50.502 [2024-11-27 21:52:13.395618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:50.502 [2024-11-27 21:52:13.395628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:50.502 [2024-11-27 21:52:13.395635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:50.502 [2024-11-27 21:52:13.395649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:50.502 [2024-11-27 21:52:13.395657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:50.502 [2024-11-27 21:52:13.395669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:50.502 [2024-11-27 21:52:13.395676] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:50.502 [2024-11-27 21:52:13.395685] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:50.502 [2024-11-27 21:52:13.395694] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:50.502 [2024-11-27 21:52:13.395703] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:50.502 [2024-11-27 21:52:13.395711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:50.503 [2024-11-27 21:52:13.395720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:50.503 [2024-11-27 21:52:13.395727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.503 [2024-11-27 21:52:13.395741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:50.503 [2024-11-27 21:52:13.395749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:19:50.503 [2024-11-27 21:52:13.395757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.503 [2024-11-27 21:52:13.409711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.503 [2024-11-27 21:52:13.409889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:50.503 [2024-11-27 21:52:13.409909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.873 ms 00:19:50.503 [2024-11-27 21:52:13.409920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.503 [2024-11-27 21:52:13.410054] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.503 [2024-11-27 21:52:13.410070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:50.503 [2024-11-27 21:52:13.410079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:50.503 [2024-11-27 21:52:13.410093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.503 [2024-11-27 21:52:13.422606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.503 [2024-11-27 21:52:13.422654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:50.503 [2024-11-27 21:52:13.422665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.491 ms 00:19:50.503 [2024-11-27 21:52:13.422679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.503 [2024-11-27 21:52:13.422747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.503 [2024-11-27 21:52:13.422759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:50.503 [2024-11-27 21:52:13.422768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:50.503 [2024-11-27 21:52:13.422778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.503 [2024-11-27 21:52:13.423251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.503 [2024-11-27 21:52:13.423285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:50.503 [2024-11-27 21:52:13.423296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.451 ms 00:19:50.503 [2024-11-27 21:52:13.423307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.503 [2024-11-27 21:52:13.423479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.503 [2024-11-27 21:52:13.423499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:50.503 [2024-11-27 21:52:13.423509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:19:50.503 [2024-11-27 21:52:13.423521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.503 [2024-11-27 21:52:13.431821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.503 [2024-11-27 21:52:13.431870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:50.503 [2024-11-27 21:52:13.431881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.276 ms 00:19:50.503 [2024-11-27 21:52:13.431891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.503 [2024-11-27 21:52:13.445526] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:50.503 [2024-11-27 21:52:13.445795] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:50.503 [2024-11-27 21:52:13.445823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.503 [2024-11-27 21:52:13.445840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:50.503 [2024-11-27 21:52:13.445855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.825 ms 00:19:50.503 [2024-11-27 21:52:13.445869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.503 [2024-11-27 21:52:13.463361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.503 [2024-11-27 
21:52:13.463417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:50.503 [2024-11-27 21:52:13.463431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.424 ms 00:19:50.503 [2024-11-27 21:52:13.463444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.503 [2024-11-27 21:52:13.466235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.503 [2024-11-27 21:52:13.466291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:50.503 [2024-11-27 21:52:13.466302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.692 ms 00:19:50.503 [2024-11-27 21:52:13.466312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.503 [2024-11-27 21:52:13.468972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.503 [2024-11-27 21:52:13.469150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:50.503 [2024-11-27 21:52:13.469169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.582 ms 00:19:50.503 [2024-11-27 21:52:13.469179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.503 [2024-11-27 21:52:13.469586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.503 [2024-11-27 21:52:13.469606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:50.503 [2024-11-27 21:52:13.469632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:19:50.503 [2024-11-27 21:52:13.469642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.503 [2024-11-27 21:52:13.491715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.503 [2024-11-27 21:52:13.491938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:50.503 [2024-11-27 21:52:13.491958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.047 ms 00:19:50.503 [2024-11-27 21:52:13.491972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.503 [2024-11-27 21:52:13.499971] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:50.503 [2024-11-27 21:52:13.518310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.503 [2024-11-27 21:52:13.518374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:50.503 [2024-11-27 21:52:13.518389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.244 ms 00:19:50.503 [2024-11-27 21:52:13.518397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.503 [2024-11-27 21:52:13.518483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.503 [2024-11-27 21:52:13.518521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:50.503 [2024-11-27 21:52:13.518532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:50.503 [2024-11-27 21:52:13.518541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.503 [2024-11-27 21:52:13.518604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.503 [2024-11-27 21:52:13.518614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:50.503 [2024-11-27 21:52:13.518625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:50.503 [2024-11-27 
21:52:13.518632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.503 [2024-11-27 21:52:13.518661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.503 [2024-11-27 21:52:13.518670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:50.503 [2024-11-27 21:52:13.518687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:50.503 [2024-11-27 21:52:13.518696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.503 [2024-11-27 21:52:13.518740] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:50.503 [2024-11-27 21:52:13.518751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.503 [2024-11-27 21:52:13.518761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:50.503 [2024-11-27 21:52:13.518769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:50.503 [2024-11-27 21:52:13.518778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.503 [2024-11-27 21:52:13.524523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.503 [2024-11-27 21:52:13.524577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:50.503 [2024-11-27 21:52:13.524589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.720 ms 00:19:50.503 [2024-11-27 21:52:13.524602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.503 [2024-11-27 21:52:13.524692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.503 [2024-11-27 21:52:13.524705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:50.503 [2024-11-27 21:52:13.524714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:50.503 [2024-11-27 21:52:13.524724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.503 [2024-11-27 21:52:13.525761] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:50.503 [2024-11-27 21:52:13.527086] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 151.673 ms, result 0 00:19:50.503 [2024-11-27 21:52:13.528701] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:50.503 Some configs were skipped because the RPC state that can call them passed over. 
00:19:50.503 21:52:13 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:50.765 [2024-11-27 21:52:13.758827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.765 [2024-11-27 21:52:13.758888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:50.765 [2024-11-27 21:52:13.758911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.116 ms 00:19:50.765 [2024-11-27 21:52:13.758920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.765 [2024-11-27 21:52:13.758961] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.265 ms, result 0 00:19:50.765 true 00:19:50.765 21:52:13 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:51.026 [2024-11-27 21:52:13.979813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.026 [2024-11-27 21:52:13.979883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:51.026 [2024-11-27 21:52:13.979898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.994 ms 00:19:51.026 [2024-11-27 21:52:13.979907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.026 [2024-11-27 21:52:13.979947] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.129 ms, result 0 00:19:51.026 true 00:19:51.026 21:52:14 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 87630 00:19:51.026 21:52:14 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87630 ']' 00:19:51.026 21:52:14 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87630 00:19:51.026 21:52:14 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:51.026 21:52:14 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:51.026 21:52:14 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87630 00:19:51.026 killing process with pid 87630 00:19:51.026 21:52:14 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:51.026 21:52:14 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:51.026 21:52:14 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87630' 00:19:51.026 21:52:14 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 87630 00:19:51.026 21:52:14 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 87630 00:19:51.289 [2024-11-27 21:52:14.164731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.289 [2024-11-27 21:52:14.164789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:51.289 [2024-11-27 21:52:14.164805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:51.289 [2024-11-27 21:52:14.164814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.289 [2024-11-27 21:52:14.164840] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:51.289 [2024-11-27 21:52:14.165352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.289 [2024-11-27 21:52:14.165377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:51.289 [2024-11-27 21:52:14.165386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.498 ms 00:19:51.289 [2024-11-27 21:52:14.165396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.289 [2024-11-27 21:52:14.165705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.289 [2024-11-27 21:52:14.165719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:51.289 [2024-11-27 21:52:14.165729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:19:51.289 [2024-11-27 21:52:14.165739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.289 [2024-11-27 21:52:14.170641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.289 [2024-11-27 21:52:14.170678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:51.289 [2024-11-27 21:52:14.170689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.883 ms 00:19:51.289 [2024-11-27 21:52:14.170700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.289 [2024-11-27 21:52:14.177647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.289 [2024-11-27 21:52:14.177683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:51.289 [2024-11-27 21:52:14.177693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.911 ms 00:19:51.289 [2024-11-27 21:52:14.177707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.289 [2024-11-27 21:52:14.180099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.289 [2024-11-27 21:52:14.180238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:51.289 [2024-11-27 21:52:14.180253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.318 ms 00:19:51.289 [2024-11-27 21:52:14.180262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.289 [2024-11-27 21:52:14.184020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.289 [2024-11-27 21:52:14.184061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:51.289 [2024-11-27 21:52:14.184074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.722 ms 00:19:51.289 [2024-11-27 21:52:14.184083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.289 [2024-11-27 21:52:14.184211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.289 [2024-11-27 21:52:14.184226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:51.289 [2024-11-27 21:52:14.184235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:19:51.289 [2024-11-27 21:52:14.184244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.289 [2024-11-27 21:52:14.186396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.289 [2024-11-27 21:52:14.186528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:51.289 [2024-11-27 21:52:14.186542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.135 ms 00:19:51.289 [2024-11-27 21:52:14.186554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.289 [2024-11-27 21:52:14.188094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.289 [2024-11-27 21:52:14.188140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:51.289 [2024-11-27 
21:52:14.188150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.505 ms 00:19:51.289 [2024-11-27 21:52:14.188158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.289 [2024-11-27 21:52:14.189489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.289 [2024-11-27 21:52:14.189529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:51.289 [2024-11-27 21:52:14.189538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.294 ms 00:19:51.289 [2024-11-27 21:52:14.189547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.289 [2024-11-27 21:52:14.190825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.289 [2024-11-27 21:52:14.190865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:51.289 [2024-11-27 21:52:14.190875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.215 ms 00:19:51.289 [2024-11-27 21:52:14.190883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.289 [2024-11-27 21:52:14.190918] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:51.289 [2024-11-27 21:52:14.190933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:51.289 [2024-11-27 21:52:14.190943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:51.289 [2024-11-27 21:52:14.190955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:51.289 [2024-11-27 21:52:14.190962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.190972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.190979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.190988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.190996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191079] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 
21:52:14.191293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:19:51.290 [2024-11-27 21:52:14.191536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-11-27 21:52:14.191577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-11-27 21:52:14.191586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-11-27 21:52:14.191593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-11-27 21:52:14.191603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-11-27 21:52:14.191610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-11-27 21:52:14.191619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-11-27 21:52:14.191626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-11-27 21:52:14.191636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-11-27 21:52:14.191643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-11-27 21:52:14.191652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-11-27 21:52:14.191660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-11-27 21:52:14.191672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-11-27 21:52:14.191679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-11-27 21:52:14.191688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-11-27 21:52:14.191696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-11-27 21:52:14.191705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-11-27 21:52:14.191712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-11-27 21:52:14.191722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-11-27 21:52:14.191730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-11-27 21:52:14.191738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:19:51.291 [2024-11-27 21:52:14.191745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-11-27 21:52:14.191755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-11-27 21:52:14.191763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-11-27 21:52:14.191772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-11-27 21:52:14.191779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-11-27 21:52:14.191789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-11-27 21:52:14.191796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-11-27 21:52:14.191807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-11-27 21:52:14.191815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-11-27 21:52:14.191832] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:51.291 [2024-11-27 21:52:14.191841] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 78ee2df8-80e5-4b2c-bba3-34ec78c869a1 00:19:51.291 [2024-11-27 21:52:14.191853] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:51.291 [2024-11-27 21:52:14.191860] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:51.291 [2024-11-27 21:52:14.191869] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:51.291 [2024-11-27 21:52:14.191876] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:51.291 [2024-11-27 21:52:14.191885] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:51.291 [2024-11-27 21:52:14.191895] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:51.291 [2024-11-27 21:52:14.191905] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:51.291 [2024-11-27 21:52:14.191911] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:51.291 [2024-11-27 21:52:14.191920] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:51.291 [2024-11-27 21:52:14.191927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.291 [2024-11-27 21:52:14.191935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:51.291 [2024-11-27 21:52:14.191944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.010 ms 00:19:51.291 [2024-11-27 21:52:14.191956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.291 [2024-11-27 21:52:14.193639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.291 [2024-11-27 21:52:14.193667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:51.291 [2024-11-27 21:52:14.193677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.663 ms 00:19:51.291 [2024-11-27 21:52:14.193686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.291 [2024-11-27 21:52:14.193777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:51.291 [2024-11-27 21:52:14.193788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:51.291 [2024-11-27 21:52:14.193796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:51.291 [2024-11-27 21:52:14.193805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.291 [2024-11-27 21:52:14.199769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.291 [2024-11-27 21:52:14.199811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:51.291 [2024-11-27 21:52:14.199820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.291 [2024-11-27 21:52:14.199829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.291 [2024-11-27 21:52:14.199907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.291 [2024-11-27 21:52:14.199918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:51.291 [2024-11-27 21:52:14.199925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.291 [2024-11-27 21:52:14.199937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.291 [2024-11-27 21:52:14.199977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.291 [2024-11-27 21:52:14.199988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:51.291 [2024-11-27 21:52:14.199996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.291 [2024-11-27 21:52:14.200005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.291 [2024-11-27 21:52:14.200022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.291 [2024-11-27 21:52:14.200032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:51.291 [2024-11-27 21:52:14.200040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.291 [2024-11-27 21:52:14.200049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.291 [2024-11-27 21:52:14.210666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.291 [2024-11-27 21:52:14.210711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:51.291 [2024-11-27 21:52:14.210722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.291 [2024-11-27 21:52:14.210738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.291 [2024-11-27 21:52:14.218889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.291 [2024-11-27 21:52:14.218931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:51.291 [2024-11-27 21:52:14.218941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.291 [2024-11-27 21:52:14.218953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.291 [2024-11-27 21:52:14.218996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.292 [2024-11-27 21:52:14.219009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:51.292 [2024-11-27 21:52:14.219017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.292 [2024-11-27 21:52:14.219027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:51.292 [2024-11-27 21:52:14.219059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.292 [2024-11-27 21:52:14.219069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:51.292 [2024-11-27 21:52:14.219077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.292 [2024-11-27 21:52:14.219086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.292 [2024-11-27 21:52:14.219150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.292 [2024-11-27 21:52:14.219164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:51.292 [2024-11-27 21:52:14.219172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.292 [2024-11-27 21:52:14.219181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.292 [2024-11-27 21:52:14.219214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.292 [2024-11-27 21:52:14.219224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:51.292 [2024-11-27 21:52:14.219236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.292 [2024-11-27 21:52:14.219250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.292 [2024-11-27 21:52:14.219287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.292 [2024-11-27 21:52:14.219299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:51.292 [2024-11-27 21:52:14.219307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.292 [2024-11-27 21:52:14.219316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.292 [2024-11-27 21:52:14.219372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.292 [2024-11-27 21:52:14.219385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:51.292 [2024-11-27 21:52:14.219393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.292 [2024-11-27 21:52:14.219402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.292 [2024-11-27 21:52:14.219537] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 54.781 ms, result 0 00:19:51.292 21:52:14 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:51.553 [2024-11-27 21:52:14.474483] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:19:51.553 [2024-11-27 21:52:14.474790] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87666 ] 00:19:51.553 [2024-11-27 21:52:14.622101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.553 [2024-11-27 21:52:14.650606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.814 [2024-11-27 21:52:14.765386] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:51.814 [2024-11-27 21:52:14.765488] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:51.814 [2024-11-27 21:52:14.925976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.814 [2024-11-27 21:52:14.926042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:51.814 [2024-11-27 21:52:14.926058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:51.814 [2024-11-27 21:52:14.926066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.814 [2024-11-27 21:52:14.928662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.814 [2024-11-27 21:52:14.928861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:51.814 [2024-11-27 21:52:14.928882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.574 ms 00:19:51.814 [2024-11-27 21:52:14.928897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.814 [2024-11-27 21:52:14.929167] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:51.814 [2024-11-27 21:52:14.929501] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:51.814 [2024-11-27 21:52:14.929533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.814 [2024-11-27 21:52:14.929543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:51.814 [2024-11-27 21:52:14.929555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:19:51.814 [2024-11-27 21:52:14.929564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.814 [2024-11-27 21:52:14.932632] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:52.076 [2024-11-27 21:52:14.936531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.076 [2024-11-27 21:52:14.936584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:52.076 [2024-11-27 21:52:14.936603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.902 ms 00:19:52.076 [2024-11-27 21:52:14.936611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.076 [2024-11-27 21:52:14.936693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.076 [2024-11-27 21:52:14.936704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:52.076 [2024-11-27 21:52:14.936714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:52.076 [2024-11-27 21:52:14.936721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.076 [2024-11-27 21:52:14.945003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:52.076 [2024-11-27 21:52:14.945046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:52.076 [2024-11-27 21:52:14.945058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.230 ms 00:19:52.076 [2024-11-27 21:52:14.945066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.076 [2024-11-27 21:52:14.945213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.076 [2024-11-27 21:52:14.945225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:52.076 [2024-11-27 21:52:14.945234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:19:52.076 [2024-11-27 21:52:14.945245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.076 [2024-11-27 21:52:14.945273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.076 [2024-11-27 21:52:14.945282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:52.076 [2024-11-27 21:52:14.945290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:52.076 [2024-11-27 21:52:14.945297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.076 [2024-11-27 21:52:14.945321] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:52.076 [2024-11-27 21:52:14.947481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.076 [2024-11-27 21:52:14.947656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:52.076 [2024-11-27 21:52:14.947674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.168 ms 00:19:52.076 [2024-11-27 21:52:14.947689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.076 [2024-11-27 21:52:14.947738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.076 [2024-11-27 21:52:14.947747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:52.076 [2024-11-27 21:52:14.947756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:52.076 [2024-11-27 21:52:14.947764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.076 [2024-11-27 21:52:14.947782] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:52.076 [2024-11-27 21:52:14.947802] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:52.076 [2024-11-27 21:52:14.947849] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:52.076 [2024-11-27 21:52:14.947867] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:52.076 [2024-11-27 21:52:14.947974] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:52.076 [2024-11-27 21:52:14.947985] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:52.076 [2024-11-27 21:52:14.947996] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:52.076 [2024-11-27 21:52:14.948007] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:52.076 [2024-11-27 21:52:14.948016] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:52.077 [2024-11-27 21:52:14.948024] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:52.077 [2024-11-27 21:52:14.948033] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:52.077 [2024-11-27 21:52:14.948041] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:52.077 [2024-11-27 21:52:14.948051] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:52.077 [2024-11-27 21:52:14.948061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.077 [2024-11-27 21:52:14.948073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:52.077 [2024-11-27 21:52:14.948081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:19:52.077 [2024-11-27 21:52:14.948091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.077 [2024-11-27 21:52:14.948178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.077 [2024-11-27 21:52:14.948187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:52.077 [2024-11-27 21:52:14.948200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:52.077 [2024-11-27 21:52:14.948211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.077 [2024-11-27 21:52:14.948315] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:52.077 [2024-11-27 21:52:14.948329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:52.077 [2024-11-27 21:52:14.948365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:52.077 [2024-11-27 21:52:14.948375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.077 [2024-11-27 21:52:14.948384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:52.077 [2024-11-27 21:52:14.948391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:52.077 [2024-11-27 21:52:14.948399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:52.077 [2024-11-27 21:52:14.948413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:52.077 [2024-11-27 21:52:14.948423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:52.077 [2024-11-27 21:52:14.948432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:52.077 [2024-11-27 21:52:14.948440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:52.077 [2024-11-27 21:52:14.948448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:52.077 [2024-11-27 21:52:14.948456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:52.077 [2024-11-27 21:52:14.948465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:52.077 [2024-11-27 21:52:14.948472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:52.077 [2024-11-27 21:52:14.948481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.077 [2024-11-27 21:52:14.948488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:52.077 [2024-11-27 21:52:14.948498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:52.077 [2024-11-27 21:52:14.948506] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.077 [2024-11-27 21:52:14.948515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:52.077 [2024-11-27 21:52:14.948523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:52.077 [2024-11-27 21:52:14.948531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.077 [2024-11-27 21:52:14.948539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:52.077 [2024-11-27 21:52:14.948551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:52.077 [2024-11-27 21:52:14.948560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.077 [2024-11-27 21:52:14.948568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:52.077 [2024-11-27 21:52:14.948575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:52.077 [2024-11-27 21:52:14.948582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.077 [2024-11-27 21:52:14.948588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:52.077 [2024-11-27 21:52:14.948595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:52.077 [2024-11-27 21:52:14.948602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.077 [2024-11-27 21:52:14.948609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:52.077 [2024-11-27 21:52:14.948615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:52.077 [2024-11-27 21:52:14.948622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:52.077 [2024-11-27 21:52:14.948629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:52.077 [2024-11-27 21:52:14.948636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:52.077 [2024-11-27 21:52:14.948643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:52.077 [2024-11-27 21:52:14.948650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:52.077 [2024-11-27 21:52:14.948656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:52.077 [2024-11-27 21:52:14.948665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.077 [2024-11-27 21:52:14.948672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:52.077 [2024-11-27 21:52:14.948679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:52.077 [2024-11-27 21:52:14.948685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.077 [2024-11-27 21:52:14.948692] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:52.077 [2024-11-27 21:52:14.948700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:52.077 [2024-11-27 21:52:14.948708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:52.077 [2024-11-27 21:52:14.948715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.077 [2024-11-27 21:52:14.948723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:52.077 [2024-11-27 21:52:14.948730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:52.077 [2024-11-27 21:52:14.948737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:52.077 
[2024-11-27 21:52:14.948744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:52.077 [2024-11-27 21:52:14.948751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:52.077 [2024-11-27 21:52:14.948758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:52.077 [2024-11-27 21:52:14.948766] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:52.077 [2024-11-27 21:52:14.948776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:52.077 [2024-11-27 21:52:14.948786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:52.077 [2024-11-27 21:52:14.948794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:52.077 [2024-11-27 21:52:14.948801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:52.077 [2024-11-27 21:52:14.948809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:52.077 [2024-11-27 21:52:14.948816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:52.077 [2024-11-27 21:52:14.948823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:52.077 [2024-11-27 21:52:14.948831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:52.077 [2024-11-27 21:52:14.948844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:52.077 [2024-11-27 21:52:14.948851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:52.077 [2024-11-27 21:52:14.948858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:52.077 [2024-11-27 21:52:14.948867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:52.077 [2024-11-27 21:52:14.948874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:52.077 [2024-11-27 21:52:14.948881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:52.077 [2024-11-27 21:52:14.948888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:52.077 [2024-11-27 21:52:14.948895] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:52.077 [2024-11-27 21:52:14.948906] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:52.077 [2024-11-27 21:52:14.948917] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:52.077 [2024-11-27 21:52:14.948924] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:52.077 [2024-11-27 21:52:14.948931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:52.077 [2024-11-27 21:52:14.948938] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:52.077 [2024-11-27 21:52:14.948946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.077 [2024-11-27 21:52:14.948954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:52.077 [2024-11-27 21:52:14.948962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms 00:19:52.077 [2024-11-27 21:52:14.948969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.077 [2024-11-27 21:52:14.963160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.077 [2024-11-27 21:52:14.963328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:52.077 [2024-11-27 21:52:14.963360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.138 ms 00:19:52.077 [2024-11-27 21:52:14.963369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.077 [2024-11-27 21:52:14.963503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.077 [2024-11-27 21:52:14.963521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:52.077 [2024-11-27 21:52:14.963530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:52.077 [2024-11-27 21:52:14.963537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.078 [2024-11-27 21:52:14.986546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.078 [2024-11-27 21:52:14.986610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:52.078 [2024-11-27 21:52:14.986638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.983 ms 00:19:52.078 [2024-11-27 21:52:14.986651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.078 [2024-11-27 21:52:14.986786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.078 [2024-11-27 21:52:14.986805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:52.078 [2024-11-27 21:52:14.986819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:52.078 [2024-11-27 21:52:14.986831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.078 [2024-11-27 21:52:14.987403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.078 [2024-11-27 21:52:14.987439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:52.078 [2024-11-27 21:52:14.987456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:19:52.078 [2024-11-27 21:52:14.987470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.078 [2024-11-27 21:52:14.987683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.078 [2024-11-27 21:52:14.987714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:52.078 [2024-11-27 21:52:14.987728] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:19:52.078 [2024-11-27 21:52:14.987740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.078 [2024-11-27 21:52:14.996029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.078 [2024-11-27 21:52:14.996076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:52.078 [2024-11-27 21:52:14.996092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.255 ms 00:19:52.078 [2024-11-27 21:52:14.996100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.078 [2024-11-27 21:52:14.999986] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:52.078 [2024-11-27 21:52:15.000038] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:52.078 [2024-11-27 21:52:15.000051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.078 [2024-11-27 21:52:15.000059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:52.078 [2024-11-27 21:52:15.000069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.841 ms 00:19:52.078 [2024-11-27 21:52:15.000077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.078 [2024-11-27 21:52:15.015775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.078 [2024-11-27 21:52:15.015822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:52.078 [2024-11-27 21:52:15.015835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.614 ms 00:19:52.078 [2024-11-27 21:52:15.015843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.078 [2024-11-27 21:52:15.018863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.078 [2024-11-27 21:52:15.018912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:52.078 [2024-11-27 21:52:15.018922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.924 ms 00:19:52.078 [2024-11-27 21:52:15.018930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.078 [2024-11-27 21:52:15.021805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.078 [2024-11-27 21:52:15.021989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:52.078 [2024-11-27 21:52:15.022007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.809 ms 00:19:52.078 [2024-11-27 21:52:15.022015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.078 [2024-11-27 21:52:15.022386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.078 [2024-11-27 21:52:15.022404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:52.078 [2024-11-27 21:52:15.022414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:19:52.078 [2024-11-27 21:52:15.022422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.078 [2024-11-27 21:52:15.045453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.078 [2024-11-27 21:52:15.045510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:52.078 [2024-11-27 21:52:15.045523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
23.006 ms 00:19:52.078 [2024-11-27 21:52:15.045532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.078 [2024-11-27 21:52:15.053744] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:52.078 [2024-11-27 21:52:15.072978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.078 [2024-11-27 21:52:15.073209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:52.078 [2024-11-27 21:52:15.073230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.341 ms 00:19:52.078 [2024-11-27 21:52:15.073240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.078 [2024-11-27 21:52:15.073379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.078 [2024-11-27 21:52:15.073393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:52.078 [2024-11-27 21:52:15.073406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:52.078 [2024-11-27 21:52:15.073414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.078 [2024-11-27 21:52:15.073475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.078 [2024-11-27 21:52:15.073485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:52.078 [2024-11-27 21:52:15.073493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:52.078 [2024-11-27 21:52:15.073501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.078 [2024-11-27 21:52:15.073526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.078 [2024-11-27 21:52:15.073535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:52.078 [2024-11-27 21:52:15.073543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:52.078 [2024-11-27 21:52:15.073555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.078 [2024-11-27 21:52:15.073597] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:52.078 [2024-11-27 21:52:15.073635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.078 [2024-11-27 21:52:15.073644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:52.078 [2024-11-27 21:52:15.073653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:52.078 [2024-11-27 21:52:15.073661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.078 [2024-11-27 21:52:15.079638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.078 [2024-11-27 21:52:15.079796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:52.078 [2024-11-27 21:52:15.079814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.953 ms 00:19:52.078 [2024-11-27 21:52:15.079822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.078 [2024-11-27 21:52:15.079915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.078 [2024-11-27 21:52:15.079926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:52.078 [2024-11-27 21:52:15.079941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:52.078 [2024-11-27 21:52:15.079949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.078 
[2024-11-27 21:52:15.081083] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:52.078 [2024-11-27 21:52:15.082472] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 154.795 ms, result 0 00:19:52.078 [2024-11-27 21:52:15.083515] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:52.078 [2024-11-27 21:52:15.091125] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:53.462  [2024-11-27T21:52:17.155Z] Copying: 21/256 [MB] (21 MBps) [2024-11-27T21:52:18.539Z] Copying: 35/256 [MB] (13 MBps) [2024-11-27T21:52:19.483Z] Copying: 58/256 [MB] (22 MBps) [2024-11-27T21:52:20.430Z] Copying: 76/256 [MB] (17 MBps) [2024-11-27T21:52:21.375Z] Copying: 96/256 [MB] (20 MBps) [2024-11-27T21:52:22.320Z] Copying: 117/256 [MB] (20 MBps) [2024-11-27T21:52:23.266Z] Copying: 130/256 [MB] (12 MBps) [2024-11-27T21:52:24.210Z] Copying: 143/256 [MB] (13 MBps) [2024-11-27T21:52:25.156Z] Copying: 159/256 [MB] (15 MBps) [2024-11-27T21:52:26.543Z] Copying: 173/256 [MB] (13 MBps) [2024-11-27T21:52:27.488Z] Copying: 184/256 [MB] (11 MBps) [2024-11-27T21:52:28.431Z] Copying: 196/256 [MB] (11 MBps) [2024-11-27T21:52:29.377Z] Copying: 207/256 [MB] (11 MBps) [2024-11-27T21:52:30.323Z] Copying: 218/256 [MB] (11 MBps) [2024-11-27T21:52:31.269Z] Copying: 229/256 [MB] (11 MBps) [2024-11-27T21:52:32.212Z] Copying: 240/256 [MB] (10 MBps) [2024-11-27T21:52:32.787Z] Copying: 251/256 [MB] (10 MBps) [2024-11-27T21:52:32.787Z] Copying: 256/256 [MB] (average 14 MBps)[2024-11-27 21:52:32.722717] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:09.666 [2024-11-27 21:52:32.725876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.666 [2024-11-27 21:52:32.726131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:09.666 [2024-11-27 21:52:32.726266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:09.666 [2024-11-27 21:52:32.726294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.666 [2024-11-27 21:52:32.726424] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:09.666 [2024-11-27 21:52:32.727251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.666 [2024-11-27 21:52:32.727301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:09.666 [2024-11-27 21:52:32.727323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.791 ms 00:20:09.666 [2024-11-27 21:52:32.727375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.666 [2024-11-27 21:52:32.727974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.666 [2024-11-27 21:52:32.728007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:09.666 [2024-11-27 21:52:32.728032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:20:09.666 [2024-11-27 21:52:32.728048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.666 [2024-11-27 21:52:32.735185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.666 [2024-11-27 21:52:32.735213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 
00:20:09.666 [2024-11-27 21:52:32.735224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.103 ms 00:20:09.666 [2024-11-27 21:52:32.735232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.666 [2024-11-27 21:52:32.742293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.666 [2024-11-27 21:52:32.742476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:09.666 [2024-11-27 21:52:32.742497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.022 ms 00:20:09.666 [2024-11-27 21:52:32.742513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.666 [2024-11-27 21:52:32.745377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.666 [2024-11-27 21:52:32.745423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:09.666 [2024-11-27 21:52:32.745433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.807 ms 00:20:09.666 [2024-11-27 21:52:32.745441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.666 [2024-11-27 21:52:32.750238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.666 [2024-11-27 21:52:32.750291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:09.666 [2024-11-27 21:52:32.750311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.736 ms 00:20:09.666 [2024-11-27 21:52:32.750320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.666 [2024-11-27 21:52:32.750477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.666 [2024-11-27 21:52:32.750489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:09.666 [2024-11-27 21:52:32.750502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:20:09.666 [2024-11-27 21:52:32.750510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.666 [2024-11-27 21:52:32.753926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.666 [2024-11-27 21:52:32.753973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:09.666 [2024-11-27 21:52:32.753983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.397 ms 00:20:09.666 [2024-11-27 21:52:32.753991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.666 [2024-11-27 21:52:32.756751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.666 [2024-11-27 21:52:32.756913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:09.666 [2024-11-27 21:52:32.756930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.716 ms 00:20:09.666 [2024-11-27 21:52:32.756937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.666 [2024-11-27 21:52:32.759611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.666 [2024-11-27 21:52:32.759658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:09.666 [2024-11-27 21:52:32.759668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.553 ms 00:20:09.666 [2024-11-27 21:52:32.759676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.666 [2024-11-27 21:52:32.761838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.666 [2024-11-27 21:52:32.761883] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:09.666 [2024-11-27 21:52:32.761893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.084 ms 00:20:09.666 [2024-11-27 21:52:32.761900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.666 [2024-11-27 21:52:32.761942] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:09.666 [2024-11-27 21:52:32.761958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:09.666 [2024-11-27 21:52:32.761968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:09.666 [2024-11-27 21:52:32.761977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:09.666 [2024-11-27 21:52:32.761984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:09.666 [2024-11-27 21:52:32.761992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:09.666 [2024-11-27 21:52:32.762000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:09.666 [2024-11-27 21:52:32.762008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:09.666 [2024-11-27 21:52:32.762015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:09.666 [2024-11-27 21:52:32.762023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:09.666 [2024-11-27 21:52:32.762032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:09.666 [2024-11-27 21:52:32.762039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:09.666 [2024-11-27 21:52:32.762047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:09.666 [2024-11-27 21:52:32.762054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:09.666 [2024-11-27 21:52:32.762062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:09.666 [2024-11-27 21:52:32.762069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:09.666 [2024-11-27 21:52:32.762077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:09.666 [2024-11-27 21:52:32.762084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:09.666 [2024-11-27 21:52:32.762091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 
[2024-11-27 21:52:32.762128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:20:09.667 [2024-11-27 21:52:32.762361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:09.667 [2024-11-27 21:52:32.762791] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:09.667 [2024-11-27 21:52:32.762800] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 78ee2df8-80e5-4b2c-bba3-34ec78c869a1 00:20:09.667 [2024-11-27 21:52:32.762809] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:09.667 [2024-11-27 21:52:32.762821] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:09.667 [2024-11-27 21:52:32.762828] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:09.667 [2024-11-27 21:52:32.762836] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:09.667 [2024-11-27 21:52:32.762847] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:09.667 [2024-11-27 21:52:32.762858] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:09.667 [2024-11-27 21:52:32.762865] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:09.668 [2024-11-27 21:52:32.762872] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:09.668 [2024-11-27 21:52:32.762879] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:09.668 [2024-11-27 21:52:32.762887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.668 [2024-11-27 21:52:32.762896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:09.668 [2024-11-27 21:52:32.762906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.946 ms 00:20:09.668 [2024-11-27 21:52:32.762914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.668 [2024-11-27 21:52:32.765102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.668 [2024-11-27 21:52:32.765140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:09.668 [2024-11-27 21:52:32.765151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.169 ms 00:20:09.668 [2024-11-27 21:52:32.765163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.668 [2024-11-27 21:52:32.765277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.668 [2024-11-27 21:52:32.765287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:09.668 [2024-11-27 21:52:32.765298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:20:09.668 [2024-11-27 21:52:32.765306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.668 [2024-11-27 21:52:32.772912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.668 [2024-11-27 21:52:32.772959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:09.668 [2024-11-27 21:52:32.772969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.668 [2024-11-27 21:52:32.772983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:09.668 [2024-11-27 21:52:32.773067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.668 [2024-11-27 21:52:32.773077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:09.668 [2024-11-27 21:52:32.773085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.668 [2024-11-27 21:52:32.773092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.668 [2024-11-27 21:52:32.773148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.668 [2024-11-27 21:52:32.773158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:09.668 [2024-11-27 21:52:32.773166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.668 [2024-11-27 21:52:32.773174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.668 [2024-11-27 21:52:32.773197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.668 [2024-11-27 21:52:32.773206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:09.668 [2024-11-27 21:52:32.773219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.668 [2024-11-27 21:52:32.773227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.929 [2024-11-27 21:52:32.786465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.929 [2024-11-27 21:52:32.786658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:09.929 [2024-11-27 21:52:32.786678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.929 [2024-11-27 21:52:32.786698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.929 [2024-11-27 21:52:32.796960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.929 [2024-11-27 21:52:32.797127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:09.929 [2024-11-27 21:52:32.797143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.929 [2024-11-27 21:52:32.797152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.929 [2024-11-27 21:52:32.797200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.929 [2024-11-27 21:52:32.797210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:09.929 [2024-11-27 21:52:32.797218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.929 [2024-11-27 21:52:32.797227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.929 [2024-11-27 21:52:32.797258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.929 [2024-11-27 21:52:32.797273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:09.929 [2024-11-27 21:52:32.797281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.929 [2024-11-27 21:52:32.797290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.929 [2024-11-27 21:52:32.797555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.929 [2024-11-27 21:52:32.797605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:09.929 [2024-11-27 21:52:32.797637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.929 [2024-11-27 
21:52:32.797657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.929 [2024-11-27 21:52:32.797721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.929 [2024-11-27 21:52:32.797748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:09.929 [2024-11-27 21:52:32.797768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.929 [2024-11-27 21:52:32.797787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.929 [2024-11-27 21:52:32.797833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.929 [2024-11-27 21:52:32.797843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:09.929 [2024-11-27 21:52:32.797852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.929 [2024-11-27 21:52:32.797860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.929 [2024-11-27 21:52:32.797909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:09.929 [2024-11-27 21:52:32.797922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:09.929 [2024-11-27 21:52:32.797931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:09.929 [2024-11-27 21:52:32.797939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.929 [2024-11-27 21:52:32.798087] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 72.199 ms, result 0 00:20:09.929 00:20:09.929 00:20:09.929 21:52:32 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:10.502 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:10.502 21:52:33 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:10.502 21:52:33 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:20:10.502 21:52:33 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:10.502 21:52:33 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:10.502 21:52:33 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:10.502 21:52:33 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:10.764 21:52:33 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 87630 00:20:10.764 Process with pid 87630 is not found 00:20:10.764 21:52:33 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87630 ']' 00:20:10.764 21:52:33 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87630 00:20:10.764 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (87630) - No such process 00:20:10.764 21:52:33 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 87630 is not found' 00:20:10.764 ************************************ 00:20:10.764 END TEST ftl_trim 00:20:10.764 ************************************ 00:20:10.764 00:20:10.764 real 1m4.339s 00:20:10.764 user 1m24.572s 00:20:10.764 sys 0m4.870s 00:20:10.764 21:52:33 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:10.764 21:52:33 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:10.764 21:52:33 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:10.764 21:52:33 ftl -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:10.764 21:52:33 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:10.764 21:52:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:10.764 ************************************ 00:20:10.764 START TEST ftl_restore 00:20:10.764 ************************************ 00:20:10.764 21:52:33 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:10.764 * Looking for test storage... 00:20:10.764 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:10.764 21:52:33 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:10.764 21:52:33 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:20:10.764 21:52:33 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:10.764 21:52:33 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:10.764 21:52:33 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:10.764 21:52:33 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:10.764 21:52:33 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:10.764 21:52:33 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:20:10.764 21:52:33 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:20:10.764 21:52:33 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:20:10.764 21:52:33 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:20:10.764 21:52:33 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:20:10.764 21:52:33 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:20:10.764 21:52:33 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:20:10.764 21:52:33 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:10.764 21:52:33 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:20:10.764 21:52:33 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:20:10.764 21:52:33 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:10.764 21:52:33 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:10.764 21:52:33 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:20:10.764 21:52:33 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:20:10.764 21:52:33 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:10.764 21:52:33 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:20:10.764 21:52:33 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:20:10.764 21:52:33 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:20:10.764 21:52:33 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:20:10.764 21:52:33 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:10.764 21:52:33 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:20:10.764 21:52:33 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:20:10.764 21:52:33 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:10.764 21:52:33 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:10.764 21:52:33 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:20:10.764 21:52:33 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:10.764 21:52:33 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:10.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.764 --rc genhtml_branch_coverage=1 00:20:10.764 --rc genhtml_function_coverage=1 00:20:10.764 --rc genhtml_legend=1 00:20:10.764 --rc geninfo_all_blocks=1 00:20:10.764 --rc geninfo_unexecuted_blocks=1 00:20:10.764 00:20:10.764 ' 00:20:10.764 21:52:33 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:10.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.764 --rc genhtml_branch_coverage=1 00:20:10.764 --rc genhtml_function_coverage=1 00:20:10.764 --rc genhtml_legend=1 00:20:10.764 --rc geninfo_all_blocks=1 00:20:10.764 --rc geninfo_unexecuted_blocks=1 00:20:10.764 00:20:10.764 ' 00:20:10.764 21:52:33 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:10.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.764 --rc genhtml_branch_coverage=1 00:20:10.764 --rc genhtml_function_coverage=1 00:20:10.764 --rc genhtml_legend=1 00:20:10.764 --rc geninfo_all_blocks=1 00:20:10.764 --rc geninfo_unexecuted_blocks=1 00:20:10.764 00:20:10.764 ' 00:20:10.764 21:52:33 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:10.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.764 --rc genhtml_branch_coverage=1 00:20:10.764 --rc genhtml_function_coverage=1 00:20:10.764 --rc genhtml_legend=1 00:20:10.764 --rc geninfo_all_blocks=1 00:20:10.764 --rc geninfo_unexecuted_blocks=1 00:20:10.764 00:20:10.764 ' 00:20:10.764 21:52:33 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:10.764 21:52:33 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.7L4JLJvNul 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:11.026 
21:52:33 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=87936 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 87936 00:20:11.026 21:52:33 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:11.026 21:52:33 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 87936 ']' 00:20:11.026 21:52:33 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.026 21:52:33 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:11.026 21:52:33 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.026 21:52:33 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:11.026 21:52:33 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:11.026 [2024-11-27 21:52:33.989197] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:20:11.026 [2024-11-27 21:52:33.989657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87936 ] 00:20:11.026 [2024-11-27 21:52:34.133794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.288 [2024-11-27 21:52:34.163533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.860 21:52:34 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:11.860 21:52:34 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:20:11.860 21:52:34 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:11.860 21:52:34 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:11.860 21:52:34 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:11.860 21:52:34 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:11.860 21:52:34 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:11.860 21:52:34 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:12.122 21:52:35 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:12.122 21:52:35 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:12.122 21:52:35 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:12.122 21:52:35 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:12.122 21:52:35 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:12.122 21:52:35 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:12.122 21:52:35 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:12.122 21:52:35 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:12.383 21:52:35 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:12.383 { 00:20:12.383 "name": "nvme0n1", 00:20:12.383 "aliases": [ 00:20:12.383 "7ecce0dc-90ed-46bb-8f52-a68742333698" 00:20:12.383 ], 00:20:12.383 "product_name": "NVMe disk", 00:20:12.383 "block_size": 4096, 00:20:12.383 "num_blocks": 1310720, 00:20:12.383 "uuid": 
"7ecce0dc-90ed-46bb-8f52-a68742333698", 00:20:12.383 "numa_id": -1, 00:20:12.383 "assigned_rate_limits": { 00:20:12.384 "rw_ios_per_sec": 0, 00:20:12.384 "rw_mbytes_per_sec": 0, 00:20:12.384 "r_mbytes_per_sec": 0, 00:20:12.384 "w_mbytes_per_sec": 0 00:20:12.384 }, 00:20:12.384 "claimed": true, 00:20:12.384 "claim_type": "read_many_write_one", 00:20:12.384 "zoned": false, 00:20:12.384 "supported_io_types": { 00:20:12.384 "read": true, 00:20:12.384 "write": true, 00:20:12.384 "unmap": true, 00:20:12.384 "flush": true, 00:20:12.384 "reset": true, 00:20:12.384 "nvme_admin": true, 00:20:12.384 "nvme_io": true, 00:20:12.384 "nvme_io_md": false, 00:20:12.384 "write_zeroes": true, 00:20:12.384 "zcopy": false, 00:20:12.384 "get_zone_info": false, 00:20:12.384 "zone_management": false, 00:20:12.384 "zone_append": false, 00:20:12.384 "compare": true, 00:20:12.384 "compare_and_write": false, 00:20:12.384 "abort": true, 00:20:12.384 "seek_hole": false, 00:20:12.384 "seek_data": false, 00:20:12.384 "copy": true, 00:20:12.384 "nvme_iov_md": false 00:20:12.384 }, 00:20:12.384 "driver_specific": { 00:20:12.384 "nvme": [ 00:20:12.384 { 00:20:12.384 "pci_address": "0000:00:11.0", 00:20:12.384 "trid": { 00:20:12.384 "trtype": "PCIe", 00:20:12.384 "traddr": "0000:00:11.0" 00:20:12.384 }, 00:20:12.384 "ctrlr_data": { 00:20:12.384 "cntlid": 0, 00:20:12.384 "vendor_id": "0x1b36", 00:20:12.384 "model_number": "QEMU NVMe Ctrl", 00:20:12.384 "serial_number": "12341", 00:20:12.384 "firmware_revision": "8.0.0", 00:20:12.384 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:12.384 "oacs": { 00:20:12.384 "security": 0, 00:20:12.384 "format": 1, 00:20:12.384 "firmware": 0, 00:20:12.384 "ns_manage": 1 00:20:12.384 }, 00:20:12.384 "multi_ctrlr": false, 00:20:12.384 "ana_reporting": false 00:20:12.384 }, 00:20:12.384 "vs": { 00:20:12.384 "nvme_version": "1.4" 00:20:12.384 }, 00:20:12.384 "ns_data": { 00:20:12.384 "id": 1, 00:20:12.384 "can_share": false 00:20:12.384 } 00:20:12.384 } 00:20:12.384 ], 00:20:12.384 "mp_policy": "active_passive" 00:20:12.384 } 00:20:12.384 } 00:20:12.384 ]' 00:20:12.384 21:52:35 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:12.384 21:52:35 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:12.384 21:52:35 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:12.384 21:52:35 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:12.384 21:52:35 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:12.384 21:52:35 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:20:12.384 21:52:35 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:12.384 21:52:35 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:12.384 21:52:35 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:12.384 21:52:35 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:12.384 21:52:35 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:12.645 21:52:35 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=3fca38ee-72cf-45ff-9ee7-8ee3534a33e6 00:20:12.645 21:52:35 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:12.645 21:52:35 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3fca38ee-72cf-45ff-9ee7-8ee3534a33e6 00:20:12.906 21:52:35 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:20:13.169 21:52:36 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=3b81d2ba-3c81-42c2-86cf-e81ca1cb2ba4 00:20:13.169 21:52:36 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3b81d2ba-3c81-42c2-86cf-e81ca1cb2ba4 00:20:13.451 21:52:36 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=645b3868-9505-4676-89d0-f804df47f02f 00:20:13.451 21:52:36 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:13.451 21:52:36 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 645b3868-9505-4676-89d0-f804df47f02f 00:20:13.451 21:52:36 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:13.451 21:52:36 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:13.451 21:52:36 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=645b3868-9505-4676-89d0-f804df47f02f 00:20:13.451 21:52:36 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:13.451 21:52:36 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 645b3868-9505-4676-89d0-f804df47f02f 00:20:13.451 21:52:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=645b3868-9505-4676-89d0-f804df47f02f 00:20:13.451 21:52:36 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:13.451 21:52:36 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:13.451 21:52:36 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:13.451 21:52:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 645b3868-9505-4676-89d0-f804df47f02f 00:20:13.451 21:52:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:13.451 { 00:20:13.451 "name": "645b3868-9505-4676-89d0-f804df47f02f", 00:20:13.451 "aliases": [ 00:20:13.451 "lvs/nvme0n1p0" 00:20:13.451 ], 00:20:13.451 "product_name": "Logical Volume", 00:20:13.451 "block_size": 4096, 00:20:13.451 "num_blocks": 26476544, 00:20:13.451 "uuid": "645b3868-9505-4676-89d0-f804df47f02f", 00:20:13.451 "assigned_rate_limits": { 00:20:13.451 "rw_ios_per_sec": 0, 00:20:13.451 "rw_mbytes_per_sec": 0, 00:20:13.451 "r_mbytes_per_sec": 0, 00:20:13.451 "w_mbytes_per_sec": 0 00:20:13.451 }, 00:20:13.451 "claimed": false, 00:20:13.451 "zoned": false, 00:20:13.451 "supported_io_types": { 00:20:13.451 "read": true, 00:20:13.451 "write": true, 00:20:13.451 "unmap": true, 00:20:13.451 "flush": false, 00:20:13.451 "reset": true, 00:20:13.451 "nvme_admin": false, 00:20:13.451 "nvme_io": false, 00:20:13.451 "nvme_io_md": false, 00:20:13.451 "write_zeroes": true, 00:20:13.451 "zcopy": false, 00:20:13.451 "get_zone_info": false, 00:20:13.451 "zone_management": false, 00:20:13.451 "zone_append": false, 00:20:13.451 "compare": false, 00:20:13.451 "compare_and_write": false, 00:20:13.451 "abort": false, 00:20:13.451 "seek_hole": true, 00:20:13.451 "seek_data": true, 00:20:13.451 "copy": false, 00:20:13.451 "nvme_iov_md": false 00:20:13.451 }, 00:20:13.451 "driver_specific": { 00:20:13.451 "lvol": { 00:20:13.451 "lvol_store_uuid": "3b81d2ba-3c81-42c2-86cf-e81ca1cb2ba4", 00:20:13.451 "base_bdev": "nvme0n1", 00:20:13.451 "thin_provision": true, 00:20:13.451 "num_allocated_clusters": 0, 00:20:13.451 "snapshot": false, 00:20:13.451 "clone": false, 00:20:13.451 "esnap_clone": false 00:20:13.451 } 00:20:13.451 } 00:20:13.451 } 00:20:13.451 ]' 00:20:13.766 21:52:36 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:13.766 21:52:36 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:13.766 21:52:36 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:13.766 21:52:36 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:13.766 21:52:36 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:13.766 21:52:36 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:13.766 21:52:36 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:13.766 21:52:36 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:13.766 21:52:36 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:13.766 21:52:36 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:13.766 21:52:36 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:13.766 21:52:36 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 645b3868-9505-4676-89d0-f804df47f02f 00:20:13.766 21:52:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=645b3868-9505-4676-89d0-f804df47f02f 00:20:13.766 21:52:36 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:13.766 21:52:36 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:13.766 21:52:36 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:13.766 21:52:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 645b3868-9505-4676-89d0-f804df47f02f 00:20:14.024 21:52:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:14.024 { 00:20:14.024 "name": "645b3868-9505-4676-89d0-f804df47f02f", 00:20:14.024 "aliases": [ 00:20:14.024 "lvs/nvme0n1p0" 00:20:14.024 ], 00:20:14.024 "product_name": "Logical Volume", 00:20:14.024 "block_size": 4096, 00:20:14.024 "num_blocks": 26476544, 00:20:14.024 "uuid": "645b3868-9505-4676-89d0-f804df47f02f", 00:20:14.024 "assigned_rate_limits": { 00:20:14.024 "rw_ios_per_sec": 0, 00:20:14.024 "rw_mbytes_per_sec": 0, 00:20:14.024 "r_mbytes_per_sec": 0, 00:20:14.024 "w_mbytes_per_sec": 0 00:20:14.024 }, 00:20:14.024 "claimed": false, 00:20:14.024 "zoned": false, 00:20:14.024 "supported_io_types": { 00:20:14.024 "read": true, 00:20:14.024 "write": true, 00:20:14.024 "unmap": true, 00:20:14.024 "flush": false, 00:20:14.024 "reset": true, 00:20:14.024 "nvme_admin": false, 00:20:14.024 "nvme_io": false, 00:20:14.024 "nvme_io_md": false, 00:20:14.024 "write_zeroes": true, 00:20:14.024 "zcopy": false, 00:20:14.024 "get_zone_info": false, 00:20:14.025 "zone_management": false, 00:20:14.025 "zone_append": false, 00:20:14.025 "compare": false, 00:20:14.025 "compare_and_write": false, 00:20:14.025 "abort": false, 00:20:14.025 "seek_hole": true, 00:20:14.025 "seek_data": true, 00:20:14.025 "copy": false, 00:20:14.025 "nvme_iov_md": false 00:20:14.025 }, 00:20:14.025 "driver_specific": { 00:20:14.025 "lvol": { 00:20:14.025 "lvol_store_uuid": "3b81d2ba-3c81-42c2-86cf-e81ca1cb2ba4", 00:20:14.025 "base_bdev": "nvme0n1", 00:20:14.025 "thin_provision": true, 00:20:14.025 "num_allocated_clusters": 0, 00:20:14.025 "snapshot": false, 00:20:14.025 "clone": false, 00:20:14.025 "esnap_clone": false 00:20:14.025 } 00:20:14.025 } 00:20:14.025 } 00:20:14.025 ]' 00:20:14.025 21:52:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
00:20:14.025 21:52:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:14.025 21:52:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:14.025 21:52:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:14.025 21:52:37 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:14.025 21:52:37 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:14.025 21:52:37 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:14.025 21:52:37 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:14.283 21:52:37 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:14.283 21:52:37 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 645b3868-9505-4676-89d0-f804df47f02f 00:20:14.283 21:52:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=645b3868-9505-4676-89d0-f804df47f02f 00:20:14.283 21:52:37 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:14.283 21:52:37 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:14.283 21:52:37 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:14.283 21:52:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 645b3868-9505-4676-89d0-f804df47f02f 00:20:14.542 21:52:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:14.542 { 00:20:14.543 "name": "645b3868-9505-4676-89d0-f804df47f02f", 00:20:14.543 "aliases": [ 00:20:14.543 "lvs/nvme0n1p0" 00:20:14.543 ], 00:20:14.543 "product_name": "Logical Volume", 00:20:14.543 "block_size": 4096, 00:20:14.543 "num_blocks": 26476544, 00:20:14.543 "uuid": "645b3868-9505-4676-89d0-f804df47f02f", 00:20:14.543 "assigned_rate_limits": { 00:20:14.543 "rw_ios_per_sec": 0, 00:20:14.543 "rw_mbytes_per_sec": 0, 00:20:14.543 "r_mbytes_per_sec": 0, 00:20:14.543 "w_mbytes_per_sec": 0 00:20:14.543 }, 00:20:14.543 "claimed": false, 00:20:14.543 "zoned": false, 00:20:14.543 "supported_io_types": { 00:20:14.543 "read": true, 00:20:14.543 "write": true, 00:20:14.543 "unmap": true, 00:20:14.543 "flush": false, 00:20:14.543 "reset": true, 00:20:14.543 "nvme_admin": false, 00:20:14.543 "nvme_io": false, 00:20:14.543 "nvme_io_md": false, 00:20:14.543 "write_zeroes": true, 00:20:14.543 "zcopy": false, 00:20:14.543 "get_zone_info": false, 00:20:14.543 "zone_management": false, 00:20:14.543 "zone_append": false, 00:20:14.543 "compare": false, 00:20:14.543 "compare_and_write": false, 00:20:14.543 "abort": false, 00:20:14.543 "seek_hole": true, 00:20:14.543 "seek_data": true, 00:20:14.543 "copy": false, 00:20:14.543 "nvme_iov_md": false 00:20:14.543 }, 00:20:14.543 "driver_specific": { 00:20:14.543 "lvol": { 00:20:14.543 "lvol_store_uuid": "3b81d2ba-3c81-42c2-86cf-e81ca1cb2ba4", 00:20:14.543 "base_bdev": "nvme0n1", 00:20:14.543 "thin_provision": true, 00:20:14.543 "num_allocated_clusters": 0, 00:20:14.543 "snapshot": false, 00:20:14.543 "clone": false, 00:20:14.543 "esnap_clone": false 00:20:14.543 } 00:20:14.543 } 00:20:14.543 } 00:20:14.543 ]' 00:20:14.543 21:52:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:14.543 21:52:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:14.543 21:52:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:14.543 21:52:37 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:20:14.543 21:52:37 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:14.543 21:52:37 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:14.543 21:52:37 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:14.543 21:52:37 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 645b3868-9505-4676-89d0-f804df47f02f --l2p_dram_limit 10' 00:20:14.543 21:52:37 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:14.543 21:52:37 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:14.543 21:52:37 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:14.543 21:52:37 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:14.543 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:14.543 21:52:37 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 645b3868-9505-4676-89d0-f804df47f02f --l2p_dram_limit 10 -c nvc0n1p0 00:20:14.804 [2024-11-27 21:52:37.802189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.805 [2024-11-27 21:52:37.802228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:14.805 [2024-11-27 21:52:37.802239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:14.805 [2024-11-27 21:52:37.802247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.805 [2024-11-27 21:52:37.802293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.805 [2024-11-27 21:52:37.802306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:14.805 [2024-11-27 21:52:37.802311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:14.805 [2024-11-27 21:52:37.802320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.805 [2024-11-27 21:52:37.802352] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:14.805 [2024-11-27 21:52:37.802553] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:14.805 [2024-11-27 21:52:37.802565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.805 [2024-11-27 21:52:37.802573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:14.805 [2024-11-27 21:52:37.802579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:20:14.805 [2024-11-27 21:52:37.802587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.805 [2024-11-27 21:52:37.802634] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a55aa5c4-5ffd-42dd-9005-29be45156bb9 00:20:14.805 [2024-11-27 21:52:37.803556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.805 [2024-11-27 21:52:37.803585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:14.805 [2024-11-27 21:52:37.803594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:14.805 [2024-11-27 21:52:37.803600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.805 [2024-11-27 21:52:37.808237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.805 [2024-11-27 
21:52:37.808263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:14.805 [2024-11-27 21:52:37.808272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.603 ms 00:20:14.805 [2024-11-27 21:52:37.808278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.805 [2024-11-27 21:52:37.808348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.805 [2024-11-27 21:52:37.808354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:14.805 [2024-11-27 21:52:37.808363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:20:14.805 [2024-11-27 21:52:37.808368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.805 [2024-11-27 21:52:37.808407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.805 [2024-11-27 21:52:37.808415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:14.805 [2024-11-27 21:52:37.808423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:14.805 [2024-11-27 21:52:37.808431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.805 [2024-11-27 21:52:37.808448] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:14.805 [2024-11-27 21:52:37.809698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.805 [2024-11-27 21:52:37.809724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:14.805 [2024-11-27 21:52:37.809731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.254 ms 00:20:14.805 [2024-11-27 21:52:37.809738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.805 [2024-11-27 21:52:37.809762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.805 [2024-11-27 21:52:37.809770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:14.805 [2024-11-27 21:52:37.809776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:14.805 [2024-11-27 21:52:37.809784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.805 [2024-11-27 21:52:37.809797] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:14.805 [2024-11-27 21:52:37.809907] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:14.805 [2024-11-27 21:52:37.809915] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:14.805 [2024-11-27 21:52:37.809928] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:14.805 [2024-11-27 21:52:37.809936] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:14.805 [2024-11-27 21:52:37.809946] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:14.805 [2024-11-27 21:52:37.809952] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:14.805 [2024-11-27 21:52:37.809962] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:14.805 [2024-11-27 21:52:37.809967] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:14.805 [2024-11-27 21:52:37.809974] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:14.805 [2024-11-27 21:52:37.809982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.805 [2024-11-27 21:52:37.809989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:14.805 [2024-11-27 21:52:37.809997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:20:14.805 [2024-11-27 21:52:37.810004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.805 [2024-11-27 21:52:37.810069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.805 [2024-11-27 21:52:37.810081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:14.805 [2024-11-27 21:52:37.810087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:14.805 [2024-11-27 21:52:37.810095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.805 [2024-11-27 21:52:37.810175] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:14.805 [2024-11-27 21:52:37.810183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:14.805 [2024-11-27 21:52:37.810190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:14.805 [2024-11-27 21:52:37.810197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.805 [2024-11-27 21:52:37.810203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:14.805 [2024-11-27 21:52:37.810211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:14.805 [2024-11-27 21:52:37.810216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:14.805 [2024-11-27 21:52:37.810222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:14.805 [2024-11-27 21:52:37.810227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:14.805 [2024-11-27 21:52:37.810234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:14.805 [2024-11-27 21:52:37.810239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:14.805 [2024-11-27 21:52:37.810246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:14.805 [2024-11-27 21:52:37.810251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:14.805 [2024-11-27 21:52:37.810259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:14.805 [2024-11-27 21:52:37.810265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:14.805 [2024-11-27 21:52:37.810272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.805 [2024-11-27 21:52:37.810277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:14.805 [2024-11-27 21:52:37.810283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:14.805 [2024-11-27 21:52:37.810288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.805 [2024-11-27 21:52:37.810295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:14.805 [2024-11-27 21:52:37.810300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:14.805 [2024-11-27 21:52:37.810306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.806 [2024-11-27 21:52:37.810311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:14.806 
[2024-11-27 21:52:37.810317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:14.806 [2024-11-27 21:52:37.810322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.806 [2024-11-27 21:52:37.810329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:14.806 [2024-11-27 21:52:37.810333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:14.806 [2024-11-27 21:52:37.810353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.806 [2024-11-27 21:52:37.810359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:14.806 [2024-11-27 21:52:37.810367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:14.806 [2024-11-27 21:52:37.810373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.806 [2024-11-27 21:52:37.810382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:14.806 [2024-11-27 21:52:37.810387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:14.806 [2024-11-27 21:52:37.810394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:14.806 [2024-11-27 21:52:37.810400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:14.806 [2024-11-27 21:52:37.810407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:14.806 [2024-11-27 21:52:37.810413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:14.806 [2024-11-27 21:52:37.810420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:14.806 [2024-11-27 21:52:37.810425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:14.806 [2024-11-27 21:52:37.810432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.806 [2024-11-27 21:52:37.810438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:14.806 [2024-11-27 21:52:37.810445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:14.806 [2024-11-27 21:52:37.810450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.806 [2024-11-27 21:52:37.810457] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:14.806 [2024-11-27 21:52:37.810467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:14.806 [2024-11-27 21:52:37.810476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:14.806 [2024-11-27 21:52:37.810484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.806 [2024-11-27 21:52:37.810495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:14.806 [2024-11-27 21:52:37.810501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:14.806 [2024-11-27 21:52:37.810507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:14.806 [2024-11-27 21:52:37.810513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:14.806 [2024-11-27 21:52:37.810521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:14.806 [2024-11-27 21:52:37.810526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:14.806 [2024-11-27 21:52:37.810538] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:14.806 [2024-11-27 
21:52:37.810546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:14.806 [2024-11-27 21:52:37.810554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:14.806 [2024-11-27 21:52:37.810561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:14.806 [2024-11-27 21:52:37.810569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:14.806 [2024-11-27 21:52:37.810575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:14.806 [2024-11-27 21:52:37.810582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:14.806 [2024-11-27 21:52:37.810588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:14.806 [2024-11-27 21:52:37.810596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:14.806 [2024-11-27 21:52:37.810602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:14.806 [2024-11-27 21:52:37.810610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:14.806 [2024-11-27 21:52:37.810616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:14.806 [2024-11-27 21:52:37.810623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:14.806 [2024-11-27 21:52:37.810629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:14.806 [2024-11-27 21:52:37.810637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:14.806 [2024-11-27 21:52:37.810643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:14.806 [2024-11-27 21:52:37.810650] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:14.806 [2024-11-27 21:52:37.810657] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:14.806 [2024-11-27 21:52:37.810665] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:14.806 [2024-11-27 21:52:37.810671] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:14.806 [2024-11-27 21:52:37.810679] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:14.806 [2024-11-27 21:52:37.810685] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:14.806 [2024-11-27 21:52:37.810693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.806 [2024-11-27 21:52:37.810699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:14.806 [2024-11-27 21:52:37.810708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:20:14.806 [2024-11-27 21:52:37.810715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.806 [2024-11-27 21:52:37.810747] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:14.806 [2024-11-27 21:52:37.810754] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:19.015 [2024-11-27 21:52:41.407653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.015 [2024-11-27 21:52:41.407887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:19.015 [2024-11-27 21:52:41.407986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3596.888 ms 00:20:19.015 [2024-11-27 21:52:41.408012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.015 [2024-11-27 21:52:41.417222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.015 [2024-11-27 21:52:41.417406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:19.015 [2024-11-27 21:52:41.417525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.110 ms 00:20:19.015 [2024-11-27 21:52:41.417551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.015 [2024-11-27 21:52:41.417689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.015 [2024-11-27 21:52:41.417714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:19.015 [2024-11-27 21:52:41.417828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:19.015 [2024-11-27 21:52:41.417851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.015 [2024-11-27 21:52:41.426781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.015 [2024-11-27 21:52:41.426908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:19.015 [2024-11-27 21:52:41.427075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.868 ms 00:20:19.015 [2024-11-27 21:52:41.427140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.015 [2024-11-27 21:52:41.427186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.015 [2024-11-27 21:52:41.427245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:19.016 [2024-11-27 21:52:41.427312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:19.016 [2024-11-27 21:52:41.427346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.016 [2024-11-27 21:52:41.427746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.016 [2024-11-27 21:52:41.427852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:19.016 [2024-11-27 21:52:41.427910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:20:19.016 [2024-11-27 21:52:41.427933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.016 
[2024-11-27 21:52:41.428110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.016 [2024-11-27 21:52:41.428202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:19.016 [2024-11-27 21:52:41.428252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:20:19.016 [2024-11-27 21:52:41.428275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.016 [2024-11-27 21:52:41.433947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.016 [2024-11-27 21:52:41.434058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:19.016 [2024-11-27 21:52:41.434128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.623 ms 00:20:19.016 [2024-11-27 21:52:41.434172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.016 [2024-11-27 21:52:41.451822] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:19.016 [2024-11-27 21:52:41.455073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.016 [2024-11-27 21:52:41.455203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:19.016 [2024-11-27 21:52:41.455268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.819 ms 00:20:19.016 [2024-11-27 21:52:41.455323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.016 [2024-11-27 21:52:41.526025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.016 [2024-11-27 21:52:41.526182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:19.016 [2024-11-27 21:52:41.526238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.618 ms 00:20:19.016 [2024-11-27 21:52:41.526265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.016 [2024-11-27 21:52:41.526468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.016 [2024-11-27 21:52:41.526503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:19.016 [2024-11-27 21:52:41.526569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:20:19.016 [2024-11-27 21:52:41.526594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.016 [2024-11-27 21:52:41.530556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.016 [2024-11-27 21:52:41.530671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:19.016 [2024-11-27 21:52:41.530723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.917 ms 00:20:19.016 [2024-11-27 21:52:41.530748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.016 [2024-11-27 21:52:41.534215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.016 [2024-11-27 21:52:41.534324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:19.016 [2024-11-27 21:52:41.534402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.426 ms 00:20:19.016 [2024-11-27 21:52:41.534425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.016 [2024-11-27 21:52:41.534803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.016 [2024-11-27 21:52:41.534839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:19.016 
[2024-11-27 21:52:41.534860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:20:19.016 [2024-11-27 21:52:41.534918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.016 [2024-11-27 21:52:41.569172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.016 [2024-11-27 21:52:41.569293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:19.016 [2024-11-27 21:52:41.569312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.217 ms 00:20:19.016 [2024-11-27 21:52:41.569322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.016 [2024-11-27 21:52:41.574079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.016 [2024-11-27 21:52:41.574117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:19.016 [2024-11-27 21:52:41.574127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.681 ms 00:20:19.016 [2024-11-27 21:52:41.574137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.016 [2024-11-27 21:52:41.577826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.016 [2024-11-27 21:52:41.577861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:19.016 [2024-11-27 21:52:41.577870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.654 ms 00:20:19.016 [2024-11-27 21:52:41.577879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.016 [2024-11-27 21:52:41.582321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.016 [2024-11-27 21:52:41.582372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:19.016 [2024-11-27 21:52:41.582382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.409 ms 00:20:19.016 [2024-11-27 21:52:41.582393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.016 [2024-11-27 21:52:41.582431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.016 [2024-11-27 21:52:41.582446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:19.016 [2024-11-27 21:52:41.582455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:19.016 [2024-11-27 21:52:41.582464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.016 [2024-11-27 21:52:41.582538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.016 [2024-11-27 21:52:41.582550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:19.016 [2024-11-27 21:52:41.582559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:19.016 [2024-11-27 21:52:41.582571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.016 [2024-11-27 21:52:41.583507] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3780.850 ms, result 0 00:20:19.016 { 00:20:19.016 "name": "ftl0", 00:20:19.016 "uuid": "a55aa5c4-5ffd-42dd-9005-29be45156bb9" 00:20:19.016 } 00:20:19.016 21:52:41 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:19.016 21:52:41 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:19.016 21:52:41 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:20:19.016 21:52:41 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:19.016 [2024-11-27 21:52:42.047207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.016 [2024-11-27 21:52:42.047273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:19.016 [2024-11-27 21:52:42.047294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:19.016 [2024-11-27 21:52:42.047303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.016 [2024-11-27 21:52:42.047361] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:19.016 [2024-11-27 21:52:42.048182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.016 [2024-11-27 21:52:42.048246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:19.016 [2024-11-27 21:52:42.048259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.803 ms 00:20:19.016 [2024-11-27 21:52:42.048271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.016 [2024-11-27 21:52:42.048568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.016 [2024-11-27 21:52:42.048583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:19.016 [2024-11-27 21:52:42.048596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:20:19.016 [2024-11-27 21:52:42.048605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.016 [2024-11-27 21:52:42.051875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.016 [2024-11-27 21:52:42.051906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:19.016 [2024-11-27 21:52:42.051918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.250 ms 00:20:19.016 [2024-11-27 21:52:42.051928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.016 [2024-11-27 21:52:42.058311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.016 [2024-11-27 21:52:42.058374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:19.016 [2024-11-27 21:52:42.058386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.362 ms 00:20:19.016 [2024-11-27 21:52:42.058404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.016 [2024-11-27 21:52:42.061311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.016 [2024-11-27 21:52:42.061388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:19.016 [2024-11-27 21:52:42.061400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.816 ms 00:20:19.016 [2024-11-27 21:52:42.061411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.016 [2024-11-27 21:52:42.067755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.016 [2024-11-27 21:52:42.067818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:19.016 [2024-11-27 21:52:42.067830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.293 ms 00:20:19.016 [2024-11-27 21:52:42.067842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.016 [2024-11-27 21:52:42.067980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.016 [2024-11-27 21:52:42.067999] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:19.016 [2024-11-27 21:52:42.068009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:20:19.016 [2024-11-27 21:52:42.068019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.016 [2024-11-27 21:52:42.071489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.016 [2024-11-27 21:52:42.071730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:19.016 [2024-11-27 21:52:42.071750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.449 ms 00:20:19.016 [2024-11-27 21:52:42.071760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.016 [2024-11-27 21:52:42.075158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.016 [2024-11-27 21:52:42.075363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:19.016 [2024-11-27 21:52:42.075383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.294 ms 00:20:19.016 [2024-11-27 21:52:42.075393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.017 [2024-11-27 21:52:42.077738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.017 [2024-11-27 21:52:42.077804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:19.017 [2024-11-27 21:52:42.077814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.293 ms 00:20:19.017 [2024-11-27 21:52:42.077825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.017 [2024-11-27 21:52:42.080115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.017 [2024-11-27 21:52:42.080172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:19.017 [2024-11-27 21:52:42.080182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.214 ms 00:20:19.017 [2024-11-27 21:52:42.080192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.017 [2024-11-27 21:52:42.080241] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:19.017 [2024-11-27 21:52:42.080260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080370] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 
[2024-11-27 21:52:42.080606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:20:19.017 [2024-11-27 21:52:42.080840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.080999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.081006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.081016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.081025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.081034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:19.017 [2024-11-27 21:52:42.081042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:19.018 [2024-11-27 21:52:42.081051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:19.018 [2024-11-27 21:52:42.081059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:20:19.018 [2024-11-27 21:52:42.081072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:19.018 [2024-11-27 21:52:42.081079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:19.018 [2024-11-27 21:52:42.081090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:19.018 [2024-11-27 21:52:42.081098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:19.018 [2024-11-27 21:52:42.081107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:19.018 [2024-11-27 21:52:42.081116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:19.018 [2024-11-27 21:52:42.081127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:19.018 [2024-11-27 21:52:42.081136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:19.018 [2024-11-27 21:52:42.081145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:19.018 [2024-11-27 21:52:42.081153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:19.018 [2024-11-27 21:52:42.081163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:19.018 [2024-11-27 21:52:42.081171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:19.018 [2024-11-27 21:52:42.081180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:19.018 [2024-11-27 21:52:42.081189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:19.018 [2024-11-27 21:52:42.081200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:19.018 [2024-11-27 21:52:42.081208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:19.018 [2024-11-27 21:52:42.081228] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:19.018 [2024-11-27 21:52:42.081238] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a55aa5c4-5ffd-42dd-9005-29be45156bb9 00:20:19.018 [2024-11-27 21:52:42.081248] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:19.018 [2024-11-27 21:52:42.081256] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:19.018 [2024-11-27 21:52:42.081266] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:19.018 [2024-11-27 21:52:42.081274] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:19.018 [2024-11-27 21:52:42.081286] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:19.018 [2024-11-27 21:52:42.081302] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:19.018 [2024-11-27 21:52:42.081311] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:19.018 [2024-11-27 21:52:42.081317] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:19.018 [2024-11-27 21:52:42.081326] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:20:19.018 [2024-11-27 21:52:42.081347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.018 [2024-11-27 21:52:42.081358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:19.018 [2024-11-27 21:52:42.081368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.107 ms 00:20:19.018 [2024-11-27 21:52:42.081378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.018 [2024-11-27 21:52:42.083875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.018 [2024-11-27 21:52:42.083921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:19.018 [2024-11-27 21:52:42.083942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.476 ms 00:20:19.018 [2024-11-27 21:52:42.083954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.018 [2024-11-27 21:52:42.084085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.018 [2024-11-27 21:52:42.084098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:19.018 [2024-11-27 21:52:42.084108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:20:19.018 [2024-11-27 21:52:42.084117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.018 [2024-11-27 21:52:42.092631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.018 [2024-11-27 21:52:42.092688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:19.018 [2024-11-27 21:52:42.092709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.018 [2024-11-27 21:52:42.092720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.018 [2024-11-27 21:52:42.092793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.018 [2024-11-27 21:52:42.092804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:19.018 [2024-11-27 21:52:42.092813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.018 [2024-11-27 21:52:42.092823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.018 [2024-11-27 21:52:42.092910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.018 [2024-11-27 21:52:42.092928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:19.018 [2024-11-27 21:52:42.092936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.018 [2024-11-27 21:52:42.092949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.018 [2024-11-27 21:52:42.092968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.018 [2024-11-27 21:52:42.092980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:19.018 [2024-11-27 21:52:42.092988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.018 [2024-11-27 21:52:42.093001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.018 [2024-11-27 21:52:42.107181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.018 [2024-11-27 21:52:42.107487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:19.018 [2024-11-27 21:52:42.107515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.018 
[2024-11-27 21:52:42.107525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.018 [2024-11-27 21:52:42.118439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.018 [2024-11-27 21:52:42.118493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:19.018 [2024-11-27 21:52:42.118505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.018 [2024-11-27 21:52:42.118515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.018 [2024-11-27 21:52:42.118615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.018 [2024-11-27 21:52:42.118631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:19.018 [2024-11-27 21:52:42.118639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.018 [2024-11-27 21:52:42.118654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.018 [2024-11-27 21:52:42.118708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.018 [2024-11-27 21:52:42.118721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:19.018 [2024-11-27 21:52:42.118734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.018 [2024-11-27 21:52:42.118744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.018 [2024-11-27 21:52:42.118818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.018 [2024-11-27 21:52:42.118832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:19.018 [2024-11-27 21:52:42.118840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.018 [2024-11-27 21:52:42.118850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.018 [2024-11-27 21:52:42.118890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.018 [2024-11-27 21:52:42.118904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:19.018 [2024-11-27 21:52:42.118912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.018 [2024-11-27 21:52:42.118924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.018 [2024-11-27 21:52:42.118967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.018 [2024-11-27 21:52:42.118983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:19.018 [2024-11-27 21:52:42.118992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.018 [2024-11-27 21:52:42.119004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.018 [2024-11-27 21:52:42.119056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.018 [2024-11-27 21:52:42.119071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:19.018 [2024-11-27 21:52:42.119082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.018 [2024-11-27 21:52:42.119098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.018 [2024-11-27 21:52:42.119242] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.998 ms, result 0 00:20:19.018 true 00:20:19.280 21:52:42 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 87936 00:20:19.280 
21:52:42 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 87936 ']' 00:20:19.280 21:52:42 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 87936 00:20:19.280 21:52:42 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:20:19.280 21:52:42 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:19.280 21:52:42 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87936 00:20:19.280 killing process with pid 87936 00:20:19.280 21:52:42 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:19.280 21:52:42 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:19.280 21:52:42 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87936' 00:20:19.280 21:52:42 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 87936 00:20:19.280 21:52:42 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 87936 00:20:22.578 21:52:45 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:20:26.783 262144+0 records in 00:20:26.783 262144+0 records out 00:20:26.783 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.00977 s, 268 MB/s 00:20:26.783 21:52:49 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:28.697 21:52:51 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:28.697 [2024-11-27 21:52:51.675248] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:20:28.697 [2024-11-27 21:52:51.675344] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88156 ] 00:20:28.958 [2024-11-27 21:52:51.818810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.958 [2024-11-27 21:52:51.838604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.958 [2024-11-27 21:52:51.937889] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:28.958 [2024-11-27 21:52:51.937962] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:29.222 [2024-11-27 21:52:52.098282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.222 [2024-11-27 21:52:52.098363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:29.222 [2024-11-27 21:52:52.098384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:29.222 [2024-11-27 21:52:52.098392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.222 [2024-11-27 21:52:52.098459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.222 [2024-11-27 21:52:52.098471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:29.222 [2024-11-27 21:52:52.098480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:29.222 [2024-11-27 21:52:52.098494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.222 [2024-11-27 21:52:52.098522] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 
as write buffer cache 00:20:29.222 [2024-11-27 21:52:52.099077] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:29.222 [2024-11-27 21:52:52.099131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.222 [2024-11-27 21:52:52.099142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:29.222 [2024-11-27 21:52:52.099156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.615 ms 00:20:29.222 [2024-11-27 21:52:52.099165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.222 [2024-11-27 21:52:52.100938] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:29.222 [2024-11-27 21:52:52.105027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.222 [2024-11-27 21:52:52.105078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:29.222 [2024-11-27 21:52:52.105092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.092 ms 00:20:29.222 [2024-11-27 21:52:52.105115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.222 [2024-11-27 21:52:52.105192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.222 [2024-11-27 21:52:52.105205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:29.222 [2024-11-27 21:52:52.105214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:29.222 [2024-11-27 21:52:52.105222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.222 [2024-11-27 21:52:52.113431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.222 [2024-11-27 21:52:52.113472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:29.222 [2024-11-27 21:52:52.113490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.166 ms 00:20:29.222 [2024-11-27 21:52:52.113497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.222 [2024-11-27 21:52:52.113624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.222 [2024-11-27 21:52:52.113636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:29.222 [2024-11-27 21:52:52.113645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:20:29.222 [2024-11-27 21:52:52.113656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.222 [2024-11-27 21:52:52.113724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.222 [2024-11-27 21:52:52.113736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:29.222 [2024-11-27 21:52:52.113745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:29.222 [2024-11-27 21:52:52.113756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.222 [2024-11-27 21:52:52.113780] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:29.222 [2024-11-27 21:52:52.115948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.222 [2024-11-27 21:52:52.116122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:29.222 [2024-11-27 21:52:52.116140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.175 ms 00:20:29.222 [2024-11-27 21:52:52.116148] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.222 [2024-11-27 21:52:52.116186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.222 [2024-11-27 21:52:52.116198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:29.222 [2024-11-27 21:52:52.116207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:29.222 [2024-11-27 21:52:52.116220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.222 [2024-11-27 21:52:52.116244] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:29.222 [2024-11-27 21:52:52.116265] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:29.222 [2024-11-27 21:52:52.116310] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:29.222 [2024-11-27 21:52:52.116327] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:29.222 [2024-11-27 21:52:52.116452] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:29.222 [2024-11-27 21:52:52.116466] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:29.222 [2024-11-27 21:52:52.116480] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:29.222 [2024-11-27 21:52:52.116490] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:29.222 [2024-11-27 21:52:52.116499] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:29.222 [2024-11-27 21:52:52.116509] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:29.222 [2024-11-27 21:52:52.116517] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:29.222 [2024-11-27 21:52:52.116525] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:29.222 [2024-11-27 21:52:52.116533] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:29.222 [2024-11-27 21:52:52.116545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.222 [2024-11-27 21:52:52.116555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:29.222 [2024-11-27 21:52:52.116564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:20:29.222 [2024-11-27 21:52:52.116573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.222 [2024-11-27 21:52:52.116661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.222 [2024-11-27 21:52:52.116673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:29.222 [2024-11-27 21:52:52.116682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:29.222 [2024-11-27 21:52:52.116690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.222 [2024-11-27 21:52:52.116795] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:29.222 [2024-11-27 21:52:52.116811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:29.222 [2024-11-27 21:52:52.116820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:29.222 
[2024-11-27 21:52:52.116833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.222 [2024-11-27 21:52:52.116845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:29.222 [2024-11-27 21:52:52.116854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:29.222 [2024-11-27 21:52:52.116862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:29.222 [2024-11-27 21:52:52.116871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:29.222 [2024-11-27 21:52:52.116883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:29.222 [2024-11-27 21:52:52.116892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:29.223 [2024-11-27 21:52:52.116905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:29.223 [2024-11-27 21:52:52.116914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:29.223 [2024-11-27 21:52:52.116922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:29.223 [2024-11-27 21:52:52.116929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:29.223 [2024-11-27 21:52:52.116939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:29.223 [2024-11-27 21:52:52.116948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.223 [2024-11-27 21:52:52.116957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:29.223 [2024-11-27 21:52:52.116965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:29.223 [2024-11-27 21:52:52.116973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.223 [2024-11-27 21:52:52.116983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:29.223 [2024-11-27 21:52:52.116992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:29.223 [2024-11-27 21:52:52.116999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.223 [2024-11-27 21:52:52.117007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:29.223 [2024-11-27 21:52:52.117014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:29.223 [2024-11-27 21:52:52.117022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.223 [2024-11-27 21:52:52.117030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:29.223 [2024-11-27 21:52:52.117041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:29.223 [2024-11-27 21:52:52.117049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.223 [2024-11-27 21:52:52.117055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:29.223 [2024-11-27 21:52:52.117062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:29.223 [2024-11-27 21:52:52.117068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.223 [2024-11-27 21:52:52.117075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:29.223 [2024-11-27 21:52:52.117081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:29.223 [2024-11-27 21:52:52.117089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:29.223 [2024-11-27 21:52:52.117096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:20:29.223 [2024-11-27 21:52:52.117102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:29.223 [2024-11-27 21:52:52.117108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:29.223 [2024-11-27 21:52:52.117115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:29.223 [2024-11-27 21:52:52.117121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:29.223 [2024-11-27 21:52:52.117130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.223 [2024-11-27 21:52:52.117136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:29.223 [2024-11-27 21:52:52.117142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:29.223 [2024-11-27 21:52:52.117151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.223 [2024-11-27 21:52:52.117158] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:29.223 [2024-11-27 21:52:52.117169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:29.223 [2024-11-27 21:52:52.117178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:29.223 [2024-11-27 21:52:52.117185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.223 [2024-11-27 21:52:52.117196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:29.223 [2024-11-27 21:52:52.117204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:29.223 [2024-11-27 21:52:52.117211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:29.223 [2024-11-27 21:52:52.117219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:29.223 [2024-11-27 21:52:52.117226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:29.223 [2024-11-27 21:52:52.117233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:29.223 [2024-11-27 21:52:52.117241] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:29.223 [2024-11-27 21:52:52.117249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:29.223 [2024-11-27 21:52:52.117258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:29.223 [2024-11-27 21:52:52.117265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:29.223 [2024-11-27 21:52:52.117272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:29.223 [2024-11-27 21:52:52.117283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:29.223 [2024-11-27 21:52:52.117291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:29.223 [2024-11-27 21:52:52.117298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:29.223 [2024-11-27 21:52:52.117305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd 
ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:29.223 [2024-11-27 21:52:52.117313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:29.223 [2024-11-27 21:52:52.117320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:29.223 [2024-11-27 21:52:52.117348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:29.223 [2024-11-27 21:52:52.117356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:29.223 [2024-11-27 21:52:52.117364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:29.223 [2024-11-27 21:52:52.117371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:29.223 [2024-11-27 21:52:52.117378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:29.223 [2024-11-27 21:52:52.117386] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:29.223 [2024-11-27 21:52:52.117394] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:29.223 [2024-11-27 21:52:52.117403] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:29.223 [2024-11-27 21:52:52.117411] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:29.223 [2024-11-27 21:52:52.117419] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:29.223 [2024-11-27 21:52:52.117429] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:29.223 [2024-11-27 21:52:52.117438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.223 [2024-11-27 21:52:52.117446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:29.223 [2024-11-27 21:52:52.117453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:20:29.223 [2024-11-27 21:52:52.117464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.223 [2024-11-27 21:52:52.130732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.223 [2024-11-27 21:52:52.130908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:29.223 [2024-11-27 21:52:52.130927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.205 ms 00:20:29.223 [2024-11-27 21:52:52.130935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.223 [2024-11-27 21:52:52.131043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.223 [2024-11-27 21:52:52.131053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:29.223 [2024-11-27 21:52:52.131062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 
00:20:29.223 [2024-11-27 21:52:52.131070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.223 [2024-11-27 21:52:52.158692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.223 [2024-11-27 21:52:52.158890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:29.223 [2024-11-27 21:52:52.158911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.577 ms 00:20:29.223 [2024-11-27 21:52:52.158921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.223 [2024-11-27 21:52:52.158972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.223 [2024-11-27 21:52:52.158982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:29.223 [2024-11-27 21:52:52.158991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:29.223 [2024-11-27 21:52:52.159005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.223 [2024-11-27 21:52:52.159579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.223 [2024-11-27 21:52:52.159611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:29.223 [2024-11-27 21:52:52.159624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.510 ms 00:20:29.223 [2024-11-27 21:52:52.159633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.223 [2024-11-27 21:52:52.159792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.223 [2024-11-27 21:52:52.159815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:29.223 [2024-11-27 21:52:52.159825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:20:29.223 [2024-11-27 21:52:52.159834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.223 [2024-11-27 21:52:52.167591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.223 [2024-11-27 21:52:52.167634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:29.223 [2024-11-27 21:52:52.167646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.730 ms 00:20:29.223 [2024-11-27 21:52:52.167661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.223 [2024-11-27 21:52:52.171547] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:29.223 [2024-11-27 21:52:52.171598] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:29.223 [2024-11-27 21:52:52.171612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.224 [2024-11-27 21:52:52.171621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:29.224 [2024-11-27 21:52:52.171631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.857 ms 00:20:29.224 [2024-11-27 21:52:52.171639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.224 [2024-11-27 21:52:52.187419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.224 [2024-11-27 21:52:52.187466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:29.224 [2024-11-27 21:52:52.187478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.722 ms 00:20:29.224 [2024-11-27 21:52:52.187492] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:29.224 [2024-11-27 21:52:52.190142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.224 [2024-11-27 21:52:52.190323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:29.224 [2024-11-27 21:52:52.190364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.586 ms 00:20:29.224 [2024-11-27 21:52:52.190372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.224 [2024-11-27 21:52:52.193076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.224 [2024-11-27 21:52:52.193122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:29.224 [2024-11-27 21:52:52.193133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.665 ms 00:20:29.224 [2024-11-27 21:52:52.193141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.224 [2024-11-27 21:52:52.193558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.224 [2024-11-27 21:52:52.193582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:29.224 [2024-11-27 21:52:52.193593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:20:29.224 [2024-11-27 21:52:52.193601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.224 [2024-11-27 21:52:52.216813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.224 [2024-11-27 21:52:52.217023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:29.224 [2024-11-27 21:52:52.217044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.190 ms 00:20:29.224 [2024-11-27 21:52:52.217061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.224 [2024-11-27 21:52:52.225167] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:29.224 [2024-11-27 21:52:52.228171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.224 [2024-11-27 21:52:52.228346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:29.224 [2024-11-27 21:52:52.228365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.061 ms 00:20:29.224 [2024-11-27 21:52:52.228382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.224 [2024-11-27 21:52:52.228462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.224 [2024-11-27 21:52:52.228474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:29.224 [2024-11-27 21:52:52.228484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:29.224 [2024-11-27 21:52:52.228501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.224 [2024-11-27 21:52:52.228570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.224 [2024-11-27 21:52:52.228581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:29.224 [2024-11-27 21:52:52.228591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:29.224 [2024-11-27 21:52:52.228607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.224 [2024-11-27 21:52:52.228628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.224 [2024-11-27 21:52:52.228638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start 
core poller 00:20:29.224 [2024-11-27 21:52:52.228648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:29.224 [2024-11-27 21:52:52.228657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.224 [2024-11-27 21:52:52.228695] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:29.224 [2024-11-27 21:52:52.228706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.224 [2024-11-27 21:52:52.228715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:29.224 [2024-11-27 21:52:52.228725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:29.224 [2024-11-27 21:52:52.228734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.224 [2024-11-27 21:52:52.233729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.224 [2024-11-27 21:52:52.233775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:29.224 [2024-11-27 21:52:52.233787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.975 ms 00:20:29.224 [2024-11-27 21:52:52.233795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.224 [2024-11-27 21:52:52.233877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.224 [2024-11-27 21:52:52.233888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:29.224 [2024-11-27 21:52:52.233899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:29.224 [2024-11-27 21:52:52.233908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.224 [2024-11-27 21:52:52.235035] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 136.301 ms, result 0 00:20:30.166  [2024-11-27T21:52:54.672Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-27T21:52:55.606Z] Copying: 20/1024 [MB] (10 MBps) [2024-11-27T21:52:56.546Z] Copying: 41/1024 [MB] (20 MBps) [2024-11-27T21:52:57.489Z] Copying: 63/1024 [MB] (22 MBps) [2024-11-27T21:52:58.433Z] Copying: 80/1024 [MB] (16 MBps) [2024-11-27T21:52:59.376Z] Copying: 96/1024 [MB] (16 MBps) [2024-11-27T21:53:00.317Z] Copying: 111/1024 [MB] (15 MBps) [2024-11-27T21:53:01.262Z] Copying: 123/1024 [MB] (11 MBps) [2024-11-27T21:53:02.646Z] Copying: 142/1024 [MB] (18 MBps) [2024-11-27T21:53:03.593Z] Copying: 158/1024 [MB] (16 MBps) [2024-11-27T21:53:04.536Z] Copying: 169/1024 [MB] (11 MBps) [2024-11-27T21:53:05.480Z] Copying: 181/1024 [MB] (11 MBps) [2024-11-27T21:53:06.419Z] Copying: 192/1024 [MB] (11 MBps) [2024-11-27T21:53:07.362Z] Copying: 218/1024 [MB] (26 MBps) [2024-11-27T21:53:08.307Z] Copying: 238/1024 [MB] (19 MBps) [2024-11-27T21:53:09.255Z] Copying: 254/1024 [MB] (16 MBps) [2024-11-27T21:53:10.640Z] Copying: 268/1024 [MB] (14 MBps) [2024-11-27T21:53:11.584Z] Copying: 280/1024 [MB] (11 MBps) [2024-11-27T21:53:12.525Z] Copying: 290/1024 [MB] (10 MBps) [2024-11-27T21:53:13.520Z] Copying: 302/1024 [MB] (11 MBps) [2024-11-27T21:53:14.466Z] Copying: 317/1024 [MB] (14 MBps) [2024-11-27T21:53:15.401Z] Copying: 329/1024 [MB] (12 MBps) [2024-11-27T21:53:16.337Z] Copying: 349/1024 [MB] (19 MBps) [2024-11-27T21:53:17.273Z] Copying: 383/1024 [MB] (34 MBps) [2024-11-27T21:53:18.662Z] Copying: 407/1024 [MB] (23 MBps) [2024-11-27T21:53:19.604Z] Copying: 424/1024 [MB] (17 MBps) [2024-11-27T21:53:20.545Z] Copying: 441/1024 [MB] (16 MBps) 
[2024-11-27T21:53:21.479Z] Copying: 451/1024 [MB] (10 MBps) [2024-11-27T21:53:22.414Z] Copying: 482/1024 [MB] (31 MBps) [2024-11-27T21:53:23.350Z] Copying: 511/1024 [MB] (28 MBps) [2024-11-27T21:53:24.293Z] Copying: 543/1024 [MB] (32 MBps) [2024-11-27T21:53:25.680Z] Copying: 558/1024 [MB] (14 MBps) [2024-11-27T21:53:26.247Z] Copying: 578/1024 [MB] (19 MBps) [2024-11-27T21:53:27.622Z] Copying: 598/1024 [MB] (20 MBps) [2024-11-27T21:53:28.557Z] Copying: 636/1024 [MB] (38 MBps) [2024-11-27T21:53:29.492Z] Copying: 676/1024 [MB] (39 MBps) [2024-11-27T21:53:30.424Z] Copying: 698/1024 [MB] (22 MBps) [2024-11-27T21:53:31.363Z] Copying: 731/1024 [MB] (32 MBps) [2024-11-27T21:53:32.297Z] Copying: 760/1024 [MB] (28 MBps) [2024-11-27T21:53:33.675Z] Copying: 780/1024 [MB] (20 MBps) [2024-11-27T21:53:34.614Z] Copying: 797/1024 [MB] (17 MBps) [2024-11-27T21:53:35.551Z] Copying: 823/1024 [MB] (25 MBps) [2024-11-27T21:53:36.492Z] Copying: 842/1024 [MB] (19 MBps) [2024-11-27T21:53:37.430Z] Copying: 856/1024 [MB] (13 MBps) [2024-11-27T21:53:38.360Z] Copying: 879/1024 [MB] (23 MBps) [2024-11-27T21:53:39.293Z] Copying: 905/1024 [MB] (26 MBps) [2024-11-27T21:53:40.659Z] Copying: 931/1024 [MB] (25 MBps) [2024-11-27T21:53:41.590Z] Copying: 955/1024 [MB] (24 MBps) [2024-11-27T21:53:42.520Z] Copying: 979/1024 [MB] (24 MBps) [2024-11-27T21:53:43.089Z] Copying: 1011/1024 [MB] (31 MBps) [2024-11-27T21:53:43.089Z] Copying: 1024/1024 [MB] (average 20 MBps)[2024-11-27 21:53:42.820977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.968 [2024-11-27 21:53:42.821014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:19.968 [2024-11-27 21:53:42.821025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:19.968 [2024-11-27 21:53:42.821035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.968 [2024-11-27 21:53:42.821056] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:19.968 [2024-11-27 21:53:42.821452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.968 [2024-11-27 21:53:42.821475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:19.968 [2024-11-27 21:53:42.821485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:21:19.968 [2024-11-27 21:53:42.821492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.968 [2024-11-27 21:53:42.823019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.968 [2024-11-27 21:53:42.823128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:19.968 [2024-11-27 21:53:42.823142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.511 ms 00:21:19.968 [2024-11-27 21:53:42.823153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.968 [2024-11-27 21:53:42.839186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.968 [2024-11-27 21:53:42.839213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:19.968 [2024-11-27 21:53:42.839221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.017 ms 00:21:19.968 [2024-11-27 21:53:42.839233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.968 [2024-11-27 21:53:42.844023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.968 [2024-11-27 21:53:42.844044] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:19.968 [2024-11-27 21:53:42.844052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.767 ms 00:21:19.968 [2024-11-27 21:53:42.844059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.968 [2024-11-27 21:53:42.845112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.968 [2024-11-27 21:53:42.845139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:19.968 [2024-11-27 21:53:42.845146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.012 ms 00:21:19.968 [2024-11-27 21:53:42.845152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.968 [2024-11-27 21:53:42.848348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.968 [2024-11-27 21:53:42.848372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:19.968 [2024-11-27 21:53:42.848380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.173 ms 00:21:19.968 [2024-11-27 21:53:42.848386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.968 [2024-11-27 21:53:42.848472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.968 [2024-11-27 21:53:42.848479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:19.968 [2024-11-27 21:53:42.848486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:21:19.968 [2024-11-27 21:53:42.848491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.968 [2024-11-27 21:53:42.850847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.968 [2024-11-27 21:53:42.850873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:19.968 [2024-11-27 21:53:42.850879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.342 ms 00:21:19.968 [2024-11-27 21:53:42.850884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.968 [2024-11-27 21:53:42.853103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.968 [2024-11-27 21:53:42.853126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:19.968 [2024-11-27 21:53:42.853133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.197 ms 00:21:19.968 [2024-11-27 21:53:42.853138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.968 [2024-11-27 21:53:42.854651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.968 [2024-11-27 21:53:42.854676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:19.968 [2024-11-27 21:53:42.854682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.491 ms 00:21:19.968 [2024-11-27 21:53:42.854687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.968 [2024-11-27 21:53:42.856358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.968 [2024-11-27 21:53:42.856381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:19.968 [2024-11-27 21:53:42.856388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.632 ms 00:21:19.968 [2024-11-27 21:53:42.856393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.968 [2024-11-27 21:53:42.856413] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Bands validity: 00:21:19.968 [2024-11-27 21:53:42.856424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:19.968 [2024-11-27 21:53:42.856646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856713] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856854] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.856990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 
21:53:42.856996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:19.969 [2024-11-27 21:53:42.857007] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:19.969 [2024-11-27 21:53:42.857013] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a55aa5c4-5ffd-42dd-9005-29be45156bb9 00:21:19.969 [2024-11-27 21:53:42.857019] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:19.969 [2024-11-27 21:53:42.857025] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:19.969 [2024-11-27 21:53:42.857030] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:19.969 [2024-11-27 21:53:42.857037] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:19.969 [2024-11-27 21:53:42.857042] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:19.969 [2024-11-27 21:53:42.857048] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:19.969 [2024-11-27 21:53:42.857055] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:19.969 [2024-11-27 21:53:42.857060] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:19.969 [2024-11-27 21:53:42.857065] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:19.969 [2024-11-27 21:53:42.857070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.969 [2024-11-27 21:53:42.857078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:19.969 [2024-11-27 21:53:42.857088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.657 ms 00:21:19.969 [2024-11-27 21:53:42.857098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.969 [2024-11-27 21:53:42.858316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.969 [2024-11-27 21:53:42.858352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:19.969 [2024-11-27 21:53:42.858364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.207 ms 00:21:19.969 [2024-11-27 21:53:42.858371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.969 [2024-11-27 21:53:42.858439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.969 [2024-11-27 21:53:42.858446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:19.969 [2024-11-27 21:53:42.858455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:19.969 [2024-11-27 21:53:42.858461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.969 [2024-11-27 21:53:42.862515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.969 [2024-11-27 21:53:42.862541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:19.969 [2024-11-27 21:53:42.862549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.969 [2024-11-27 21:53:42.862554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.969 [2024-11-27 21:53:42.862596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.969 [2024-11-27 21:53:42.862602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:19.969 [2024-11-27 21:53:42.862608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.969 
[2024-11-27 21:53:42.862613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.969 [2024-11-27 21:53:42.862640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.969 [2024-11-27 21:53:42.862648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:19.969 [2024-11-27 21:53:42.862654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.969 [2024-11-27 21:53:42.862659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.970 [2024-11-27 21:53:42.862671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.970 [2024-11-27 21:53:42.862679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:19.970 [2024-11-27 21:53:42.862685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.970 [2024-11-27 21:53:42.862690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.970 [2024-11-27 21:53:42.870054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.970 [2024-11-27 21:53:42.870085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:19.970 [2024-11-27 21:53:42.870093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.970 [2024-11-27 21:53:42.870099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.970 [2024-11-27 21:53:42.876036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.970 [2024-11-27 21:53:42.876202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:19.970 [2024-11-27 21:53:42.876214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.970 [2024-11-27 21:53:42.876220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.970 [2024-11-27 21:53:42.876256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.970 [2024-11-27 21:53:42.876263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:19.970 [2024-11-27 21:53:42.876275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.970 [2024-11-27 21:53:42.876281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.970 [2024-11-27 21:53:42.876300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.970 [2024-11-27 21:53:42.876306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:19.970 [2024-11-27 21:53:42.876315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.970 [2024-11-27 21:53:42.876324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.970 [2024-11-27 21:53:42.876391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.970 [2024-11-27 21:53:42.876400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:19.970 [2024-11-27 21:53:42.876406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.970 [2024-11-27 21:53:42.876412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.970 [2024-11-27 21:53:42.876433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.970 [2024-11-27 21:53:42.876441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:19.970 [2024-11-27 21:53:42.876448] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.970 [2024-11-27 21:53:42.876456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.970 [2024-11-27 21:53:42.876485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.970 [2024-11-27 21:53:42.876493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:19.970 [2024-11-27 21:53:42.876499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.970 [2024-11-27 21:53:42.876507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.970 [2024-11-27 21:53:42.876541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.970 [2024-11-27 21:53:42.876550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:19.970 [2024-11-27 21:53:42.876559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.970 [2024-11-27 21:53:42.876565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.970 [2024-11-27 21:53:42.876659] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.658 ms, result 0 00:21:20.258 00:21:20.259 00:21:20.259 21:53:43 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:20.518 [2024-11-27 21:53:43.410776] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:21:20.518 [2024-11-27 21:53:43.411028] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88689 ] 00:21:20.518 [2024-11-27 21:53:43.551759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.518 [2024-11-27 21:53:43.572832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.789 [2024-11-27 21:53:43.657580] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:20.789 [2024-11-27 21:53:43.657633] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:20.789 [2024-11-27 21:53:43.806154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.789 [2024-11-27 21:53:43.806187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:20.789 [2024-11-27 21:53:43.806199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:20.789 [2024-11-27 21:53:43.806205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.789 [2024-11-27 21:53:43.806246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.789 [2024-11-27 21:53:43.806254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:20.789 [2024-11-27 21:53:43.806260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:20.789 [2024-11-27 21:53:43.806269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.789 [2024-11-27 21:53:43.806285] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:20.789 [2024-11-27 21:53:43.806688] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:20.790 [2024-11-27 21:53:43.806711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.790 [2024-11-27 21:53:43.806720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:20.790 [2024-11-27 21:53:43.806728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:21:20.790 [2024-11-27 21:53:43.806735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.790 [2024-11-27 21:53:43.807725] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:20.790 [2024-11-27 21:53:43.809696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.790 [2024-11-27 21:53:43.809819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:20.790 [2024-11-27 21:53:43.809832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.972 ms 00:21:20.790 [2024-11-27 21:53:43.809843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.790 [2024-11-27 21:53:43.809880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.790 [2024-11-27 21:53:43.809887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:20.790 [2024-11-27 21:53:43.809895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:20.790 [2024-11-27 21:53:43.809901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.790 [2024-11-27 21:53:43.814179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.790 [2024-11-27 21:53:43.814202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:20.790 [2024-11-27 21:53:43.814213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.250 ms 00:21:20.790 [2024-11-27 21:53:43.814224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.790 [2024-11-27 21:53:43.814285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.790 [2024-11-27 21:53:43.814292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:20.790 [2024-11-27 21:53:43.814300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:21:20.790 [2024-11-27 21:53:43.814306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.790 [2024-11-27 21:53:43.814360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.790 [2024-11-27 21:53:43.814369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:20.790 [2024-11-27 21:53:43.814376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:20.790 [2024-11-27 21:53:43.814386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.790 [2024-11-27 21:53:43.814403] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:20.790 [2024-11-27 21:53:43.815543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.790 [2024-11-27 21:53:43.815565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:20.790 [2024-11-27 21:53:43.815572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.144 ms 00:21:20.790 [2024-11-27 21:53:43.815577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.790 [2024-11-27 21:53:43.815599] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.790 [2024-11-27 21:53:43.815605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:20.790 [2024-11-27 21:53:43.815611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:20.790 [2024-11-27 21:53:43.815619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.790 [2024-11-27 21:53:43.815636] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:20.790 [2024-11-27 21:53:43.815651] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:20.790 [2024-11-27 21:53:43.815680] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:20.790 [2024-11-27 21:53:43.815692] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:20.790 [2024-11-27 21:53:43.815774] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:20.790 [2024-11-27 21:53:43.815782] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:20.790 [2024-11-27 21:53:43.815794] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:20.790 [2024-11-27 21:53:43.815801] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:20.790 [2024-11-27 21:53:43.815809] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:20.790 [2024-11-27 21:53:43.815817] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:20.790 [2024-11-27 21:53:43.815823] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:20.790 [2024-11-27 21:53:43.815828] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:20.790 [2024-11-27 21:53:43.815834] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:20.790 [2024-11-27 21:53:43.815842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.790 [2024-11-27 21:53:43.815848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:20.790 [2024-11-27 21:53:43.815854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:21:20.790 [2024-11-27 21:53:43.815859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.790 [2024-11-27 21:53:43.815924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.790 [2024-11-27 21:53:43.815931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:20.790 [2024-11-27 21:53:43.815937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:20.790 [2024-11-27 21:53:43.815948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.790 [2024-11-27 21:53:43.816028] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:20.790 [2024-11-27 21:53:43.816036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:20.790 [2024-11-27 21:53:43.816042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:20.790 [2024-11-27 21:53:43.816048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:21:20.790 [2024-11-27 21:53:43.816054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:20.790 [2024-11-27 21:53:43.816059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:20.790 [2024-11-27 21:53:43.816064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:20.790 [2024-11-27 21:53:43.816071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:20.790 [2024-11-27 21:53:43.816076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:20.790 [2024-11-27 21:53:43.816081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:20.790 [2024-11-27 21:53:43.816088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:20.790 [2024-11-27 21:53:43.816094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:20.790 [2024-11-27 21:53:43.816101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:20.790 [2024-11-27 21:53:43.816106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:20.790 [2024-11-27 21:53:43.816111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:20.790 [2024-11-27 21:53:43.816116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.790 [2024-11-27 21:53:43.816122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:20.790 [2024-11-27 21:53:43.816127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:20.790 [2024-11-27 21:53:43.816131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.790 [2024-11-27 21:53:43.816137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:20.790 [2024-11-27 21:53:43.816141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:20.790 [2024-11-27 21:53:43.816146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:20.790 [2024-11-27 21:53:43.816151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:20.790 [2024-11-27 21:53:43.816156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:20.790 [2024-11-27 21:53:43.816161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:20.790 [2024-11-27 21:53:43.816167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:20.790 [2024-11-27 21:53:43.816171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:20.790 [2024-11-27 21:53:43.816176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:20.790 [2024-11-27 21:53:43.816184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:20.790 [2024-11-27 21:53:43.816190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:20.790 [2024-11-27 21:53:43.816195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:20.790 [2024-11-27 21:53:43.816202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:20.790 [2024-11-27 21:53:43.816208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:20.790 [2024-11-27 21:53:43.816214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:20.790 [2024-11-27 21:53:43.816219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:20.790 [2024-11-27 21:53:43.816225] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:20.790 [2024-11-27 21:53:43.816230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:20.790 [2024-11-27 21:53:43.816236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:20.790 [2024-11-27 21:53:43.816241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:20.790 [2024-11-27 21:53:43.816247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.790 [2024-11-27 21:53:43.816253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:20.790 [2024-11-27 21:53:43.816259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:20.790 [2024-11-27 21:53:43.816265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.790 [2024-11-27 21:53:43.816272] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:20.790 [2024-11-27 21:53:43.816281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:20.790 [2024-11-27 21:53:43.816287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:20.790 [2024-11-27 21:53:43.816294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.790 [2024-11-27 21:53:43.816303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:20.790 [2024-11-27 21:53:43.816309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:20.791 [2024-11-27 21:53:43.816315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:20.791 [2024-11-27 21:53:43.816321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:20.791 [2024-11-27 21:53:43.816326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:20.791 [2024-11-27 21:53:43.816332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:20.791 [2024-11-27 21:53:43.816352] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:20.791 [2024-11-27 21:53:43.816359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:20.791 [2024-11-27 21:53:43.816367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:20.791 [2024-11-27 21:53:43.816373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:20.791 [2024-11-27 21:53:43.816381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:20.791 [2024-11-27 21:53:43.816387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:20.791 [2024-11-27 21:53:43.816393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:20.791 [2024-11-27 21:53:43.816401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:20.791 [2024-11-27 21:53:43.816407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:20.791 [2024-11-27 21:53:43.816413] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:20.791 [2024-11-27 21:53:43.816419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:20.791 [2024-11-27 21:53:43.816429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:20.791 [2024-11-27 21:53:43.816436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:20.791 [2024-11-27 21:53:43.816442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:20.791 [2024-11-27 21:53:43.816448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:20.791 [2024-11-27 21:53:43.816454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:20.791 [2024-11-27 21:53:43.816460] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:20.791 [2024-11-27 21:53:43.816467] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:20.791 [2024-11-27 21:53:43.816474] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:20.791 [2024-11-27 21:53:43.816480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:20.791 [2024-11-27 21:53:43.816487] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:20.791 [2024-11-27 21:53:43.816494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:20.791 [2024-11-27 21:53:43.816501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.791 [2024-11-27 21:53:43.816511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:20.791 [2024-11-27 21:53:43.816517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:21:20.791 [2024-11-27 21:53:43.816526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.791 [2024-11-27 21:53:43.824131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.791 [2024-11-27 21:53:43.824157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:20.791 [2024-11-27 21:53:43.824166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.569 ms 00:21:20.791 [2024-11-27 21:53:43.824172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.791 [2024-11-27 21:53:43.824232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.791 [2024-11-27 21:53:43.824238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:20.791 [2024-11-27 21:53:43.824244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:21:20.791 [2024-11-27 21:53:43.824249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:20.791 [2024-11-27 21:53:43.837215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.791 [2024-11-27 21:53:43.837329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:20.791 [2024-11-27 21:53:43.837352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.931 ms 00:21:20.791 [2024-11-27 21:53:43.837359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.791 [2024-11-27 21:53:43.837398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.791 [2024-11-27 21:53:43.837406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:20.791 [2024-11-27 21:53:43.837413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:20.791 [2024-11-27 21:53:43.837418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.791 [2024-11-27 21:53:43.837735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.791 [2024-11-27 21:53:43.837754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:20.791 [2024-11-27 21:53:43.837766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:21:20.791 [2024-11-27 21:53:43.837772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.791 [2024-11-27 21:53:43.837868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.791 [2024-11-27 21:53:43.837879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:20.791 [2024-11-27 21:53:43.837886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:21:20.791 [2024-11-27 21:53:43.837892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.791 [2024-11-27 21:53:43.843052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.791 [2024-11-27 21:53:43.843084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:20.791 [2024-11-27 21:53:43.843094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.143 ms 00:21:20.791 [2024-11-27 21:53:43.843103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.791 [2024-11-27 21:53:43.845663] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:20.791 [2024-11-27 21:53:43.845697] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:20.791 [2024-11-27 21:53:43.845716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.791 [2024-11-27 21:53:43.845724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:20.791 [2024-11-27 21:53:43.845733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.529 ms 00:21:20.791 [2024-11-27 21:53:43.845741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.791 [2024-11-27 21:53:43.858021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.791 [2024-11-27 21:53:43.858054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:20.791 [2024-11-27 21:53:43.858069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.241 ms 00:21:20.791 [2024-11-27 21:53:43.858076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.791 [2024-11-27 21:53:43.860106] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.791 [2024-11-27 21:53:43.860136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:20.791 [2024-11-27 21:53:43.860144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.991 ms 00:21:20.791 [2024-11-27 21:53:43.860149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.791 [2024-11-27 21:53:43.861947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.791 [2024-11-27 21:53:43.861979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:20.791 [2024-11-27 21:53:43.861987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.772 ms 00:21:20.791 [2024-11-27 21:53:43.861994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.791 [2024-11-27 21:53:43.862240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.791 [2024-11-27 21:53:43.862250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:20.791 [2024-11-27 21:53:43.862257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:21:20.791 [2024-11-27 21:53:43.862265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.791 [2024-11-27 21:53:43.876415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.791 [2024-11-27 21:53:43.876449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:20.791 [2024-11-27 21:53:43.876458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.132 ms 00:21:20.791 [2024-11-27 21:53:43.876464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.791 [2024-11-27 21:53:43.882097] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:20.791 [2024-11-27 21:53:43.883897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.791 [2024-11-27 21:53:43.884031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:20.791 [2024-11-27 21:53:43.884044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.402 ms 00:21:20.791 [2024-11-27 21:53:43.884054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.791 [2024-11-27 21:53:43.884095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.791 [2024-11-27 21:53:43.884104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:20.791 [2024-11-27 21:53:43.884118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:20.791 [2024-11-27 21:53:43.884124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.791 [2024-11-27 21:53:43.884180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.791 [2024-11-27 21:53:43.884187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:20.791 [2024-11-27 21:53:43.884198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:20.791 [2024-11-27 21:53:43.884203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.791 [2024-11-27 21:53:43.884220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.791 [2024-11-27 21:53:43.884229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:20.791 [2024-11-27 21:53:43.884237] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:20.791 [2024-11-27 21:53:43.884243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.791 [2024-11-27 21:53:43.884272] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:20.791 [2024-11-27 21:53:43.884280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.792 [2024-11-27 21:53:43.884285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:20.792 [2024-11-27 21:53:43.884294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:20.792 [2024-11-27 21:53:43.884301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.792 [2024-11-27 21:53:43.887358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.792 [2024-11-27 21:53:43.887383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:20.792 [2024-11-27 21:53:43.887391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.042 ms 00:21:20.792 [2024-11-27 21:53:43.887398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.792 [2024-11-27 21:53:43.887451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.792 [2024-11-27 21:53:43.887458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:20.792 [2024-11-27 21:53:43.887464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:20.792 [2024-11-27 21:53:43.887470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.792 [2024-11-27 21:53:43.888171] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 81.703 ms, result 0 00:21:22.247  [2024-11-27T21:53:46.312Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-27T21:53:47.250Z] Copying: 38/1024 [MB] (18 MBps) [2024-11-27T21:53:48.196Z] Copying: 63/1024 [MB] (24 MBps) [2024-11-27T21:53:49.141Z] Copying: 79/1024 [MB] (15 MBps) [2024-11-27T21:53:50.081Z] Copying: 96/1024 [MB] (16 MBps) [2024-11-27T21:53:51.023Z] Copying: 109/1024 [MB] (13 MBps) [2024-11-27T21:53:52.404Z] Copying: 139/1024 [MB] (29 MBps) [2024-11-27T21:53:53.347Z] Copying: 156/1024 [MB] (16 MBps) [2024-11-27T21:53:54.293Z] Copying: 175/1024 [MB] (19 MBps) [2024-11-27T21:53:55.238Z] Copying: 195/1024 [MB] (20 MBps) [2024-11-27T21:53:56.180Z] Copying: 213/1024 [MB] (18 MBps) [2024-11-27T21:53:57.122Z] Copying: 232/1024 [MB] (18 MBps) [2024-11-27T21:53:58.065Z] Copying: 252/1024 [MB] (19 MBps) [2024-11-27T21:53:59.454Z] Copying: 276/1024 [MB] (24 MBps) [2024-11-27T21:54:00.026Z] Copying: 293/1024 [MB] (17 MBps) [2024-11-27T21:54:01.411Z] Copying: 305/1024 [MB] (11 MBps) [2024-11-27T21:54:02.352Z] Copying: 317/1024 [MB] (12 MBps) [2024-11-27T21:54:03.296Z] Copying: 329/1024 [MB] (11 MBps) [2024-11-27T21:54:04.238Z] Copying: 345/1024 [MB] (15 MBps) [2024-11-27T21:54:05.177Z] Copying: 361/1024 [MB] (16 MBps) [2024-11-27T21:54:06.118Z] Copying: 378/1024 [MB] (16 MBps) [2024-11-27T21:54:07.062Z] Copying: 397/1024 [MB] (19 MBps) [2024-11-27T21:54:08.450Z] Copying: 411/1024 [MB] (13 MBps) [2024-11-27T21:54:09.023Z] Copying: 429/1024 [MB] (18 MBps) [2024-11-27T21:54:10.408Z] Copying: 441/1024 [MB] (12 MBps) [2024-11-27T21:54:11.353Z] Copying: 457/1024 [MB] (15 MBps) [2024-11-27T21:54:12.295Z] Copying: 470/1024 [MB] (13 MBps) [2024-11-27T21:54:13.244Z] Copying: 480/1024 [MB] (10 MBps) [2024-11-27T21:54:14.185Z] Copying: 
491/1024 [MB] (10 MBps) [2024-11-27T21:54:15.131Z] Copying: 507/1024 [MB] (16 MBps) [2024-11-27T21:54:16.096Z] Copying: 519/1024 [MB] (11 MBps) [2024-11-27T21:54:17.075Z] Copying: 532/1024 [MB] (13 MBps) [2024-11-27T21:54:18.463Z] Copying: 547/1024 [MB] (14 MBps) [2024-11-27T21:54:19.035Z] Copying: 559/1024 [MB] (11 MBps) [2024-11-27T21:54:20.434Z] Copying: 570/1024 [MB] (10 MBps) [2024-11-27T21:54:21.378Z] Copying: 580/1024 [MB] (10 MBps) [2024-11-27T21:54:22.323Z] Copying: 590/1024 [MB] (10 MBps) [2024-11-27T21:54:23.267Z] Copying: 601/1024 [MB] (10 MBps) [2024-11-27T21:54:24.228Z] Copying: 611/1024 [MB] (10 MBps) [2024-11-27T21:54:25.173Z] Copying: 622/1024 [MB] (10 MBps) [2024-11-27T21:54:26.119Z] Copying: 633/1024 [MB] (10 MBps) [2024-11-27T21:54:27.063Z] Copying: 643/1024 [MB] (10 MBps) [2024-11-27T21:54:28.450Z] Copying: 653/1024 [MB] (10 MBps) [2024-11-27T21:54:29.023Z] Copying: 664/1024 [MB] (10 MBps) [2024-11-27T21:54:30.413Z] Copying: 674/1024 [MB] (10 MBps) [2024-11-27T21:54:31.359Z] Copying: 685/1024 [MB] (10 MBps) [2024-11-27T21:54:32.302Z] Copying: 696/1024 [MB] (10 MBps) [2024-11-27T21:54:33.247Z] Copying: 706/1024 [MB] (10 MBps) [2024-11-27T21:54:34.189Z] Copying: 717/1024 [MB] (10 MBps) [2024-11-27T21:54:35.132Z] Copying: 727/1024 [MB] (10 MBps) [2024-11-27T21:54:36.076Z] Copying: 738/1024 [MB] (10 MBps) [2024-11-27T21:54:37.460Z] Copying: 748/1024 [MB] (10 MBps) [2024-11-27T21:54:38.034Z] Copying: 758/1024 [MB] (10 MBps) [2024-11-27T21:54:39.423Z] Copying: 769/1024 [MB] (10 MBps) [2024-11-27T21:54:40.367Z] Copying: 779/1024 [MB] (10 MBps) [2024-11-27T21:54:41.311Z] Copying: 790/1024 [MB] (10 MBps) [2024-11-27T21:54:42.255Z] Copying: 809/1024 [MB] (19 MBps) [2024-11-27T21:54:43.199Z] Copying: 820/1024 [MB] (10 MBps) [2024-11-27T21:54:44.142Z] Copying: 830/1024 [MB] (10 MBps) [2024-11-27T21:54:45.085Z] Copying: 846/1024 [MB] (15 MBps) [2024-11-27T21:54:46.027Z] Copying: 867/1024 [MB] (21 MBps) [2024-11-27T21:54:47.414Z] Copying: 882/1024 [MB] (15 MBps) [2024-11-27T21:54:48.068Z] Copying: 893/1024 [MB] (11 MBps) [2024-11-27T21:54:49.459Z] Copying: 904/1024 [MB] (10 MBps) [2024-11-27T21:54:50.032Z] Copying: 920/1024 [MB] (15 MBps) [2024-11-27T21:54:51.420Z] Copying: 936/1024 [MB] (15 MBps) [2024-11-27T21:54:52.366Z] Copying: 955/1024 [MB] (19 MBps) [2024-11-27T21:54:53.309Z] Copying: 966/1024 [MB] (10 MBps) [2024-11-27T21:54:54.252Z] Copying: 984/1024 [MB] (18 MBps) [2024-11-27T21:54:55.195Z] Copying: 1003/1024 [MB] (19 MBps) [2024-11-27T21:54:55.195Z] Copying: 1022/1024 [MB] (19 MBps) [2024-11-27T21:54:55.766Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-27 21:54:55.456661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.645 [2024-11-27 21:54:55.457120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:32.645 [2024-11-27 21:54:55.457177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:32.645 [2024-11-27 21:54:55.457199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.645 [2024-11-27 21:54:55.457258] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:32.645 [2024-11-27 21:54:55.458461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.645 [2024-11-27 21:54:55.458534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:32.645 [2024-11-27 21:54:55.458557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.168 ms 
00:22:32.645 [2024-11-27 21:54:55.458575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.645 [2024-11-27 21:54:55.459070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.645 [2024-11-27 21:54:55.459093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:32.645 [2024-11-27 21:54:55.459115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:22:32.645 [2024-11-27 21:54:55.459139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.645 [2024-11-27 21:54:55.468653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.645 [2024-11-27 21:54:55.468695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:32.645 [2024-11-27 21:54:55.468718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.482 ms 00:22:32.645 [2024-11-27 21:54:55.468735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.645 [2024-11-27 21:54:55.475579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.645 [2024-11-27 21:54:55.475627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:32.645 [2024-11-27 21:54:55.475641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.809 ms 00:22:32.645 [2024-11-27 21:54:55.475650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.645 [2024-11-27 21:54:55.478808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.645 [2024-11-27 21:54:55.478863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:32.645 [2024-11-27 21:54:55.478875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.081 ms 00:22:32.645 [2024-11-27 21:54:55.478884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.645 [2024-11-27 21:54:55.485214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.645 [2024-11-27 21:54:55.485632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:32.645 [2024-11-27 21:54:55.485680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.274 ms 00:22:32.645 [2024-11-27 21:54:55.485721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.645 [2024-11-27 21:54:55.485994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.645 [2024-11-27 21:54:55.486018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:32.645 [2024-11-27 21:54:55.486039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:22:32.645 [2024-11-27 21:54:55.486077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.645 [2024-11-27 21:54:55.489843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.645 [2024-11-27 21:54:55.489922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:32.645 [2024-11-27 21:54:55.489944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.727 ms 00:22:32.645 [2024-11-27 21:54:55.489960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.645 [2024-11-27 21:54:55.493094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.645 [2024-11-27 21:54:55.493437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:32.645 [2024-11-27 21:54:55.493472] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.061 ms 00:22:32.645 [2024-11-27 21:54:55.493489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.645 [2024-11-27 21:54:55.495833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.645 [2024-11-27 21:54:55.495884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:32.645 [2024-11-27 21:54:55.495895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.278 ms 00:22:32.645 [2024-11-27 21:54:55.495902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.645 [2024-11-27 21:54:55.498323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.645 [2024-11-27 21:54:55.498386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:32.645 [2024-11-27 21:54:55.498396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.343 ms 00:22:32.645 [2024-11-27 21:54:55.498405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.645 [2024-11-27 21:54:55.498447] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:32.645 [2024-11-27 21:54:55.498464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:32.645 [2024-11-27 21:54:55.498476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:32.645 [2024-11-27 21:54:55.498484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498800] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.498993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.499001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.499009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.499017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 
21:54:55.499025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.499033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.499041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.499049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.499058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.499066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.499074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.499082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.499090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.499097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.499106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.499114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.499122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.499130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.499137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.499146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.499154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.499162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.499170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.499178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.499187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.499196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.499204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.499211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:32.646 [2024-11-27 21:54:55.499219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 
00:22:32.646 [2024-11-27 21:54:55.499227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:32.647 [2024-11-27 21:54:55.499234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:32.647 [2024-11-27 21:54:55.499242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:32.647 [2024-11-27 21:54:55.499251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:32.647 [2024-11-27 21:54:55.499259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:32.647 [2024-11-27 21:54:55.499266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:32.647 [2024-11-27 21:54:55.499274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:32.647 [2024-11-27 21:54:55.499282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:32.647 [2024-11-27 21:54:55.499289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:32.647 [2024-11-27 21:54:55.499307] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:32.647 [2024-11-27 21:54:55.499316] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a55aa5c4-5ffd-42dd-9005-29be45156bb9 00:22:32.647 [2024-11-27 21:54:55.499324] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:32.647 [2024-11-27 21:54:55.499354] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:32.647 [2024-11-27 21:54:55.499362] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:32.647 [2024-11-27 21:54:55.499372] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:32.647 [2024-11-27 21:54:55.499380] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:32.647 [2024-11-27 21:54:55.499388] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:32.647 [2024-11-27 21:54:55.499404] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:32.647 [2024-11-27 21:54:55.499412] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:32.647 [2024-11-27 21:54:55.499419] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:32.647 [2024-11-27 21:54:55.499429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.647 [2024-11-27 21:54:55.499441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:32.647 [2024-11-27 21:54:55.499451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.983 ms 00:22:32.647 [2024-11-27 21:54:55.499463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.647 [2024-11-27 21:54:55.501966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:32.647 [2024-11-27 21:54:55.502001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:32.647 [2024-11-27 21:54:55.502013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.483 ms 00:22:32.647 [2024-11-27 21:54:55.502022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.647 [2024-11-27 21:54:55.502176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:32.647 [2024-11-27 21:54:55.502186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:32.647 [2024-11-27 21:54:55.502195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:22:32.647 [2024-11-27 21:54:55.502204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.647 [2024-11-27 21:54:55.509973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.647 [2024-11-27 21:54:55.510026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:32.647 [2024-11-27 21:54:55.510039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.647 [2024-11-27 21:54:55.510048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.647 [2024-11-27 21:54:55.510126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.647 [2024-11-27 21:54:55.510135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:32.647 [2024-11-27 21:54:55.510143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.647 [2024-11-27 21:54:55.510151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.647 [2024-11-27 21:54:55.510221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.647 [2024-11-27 21:54:55.510232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:32.647 [2024-11-27 21:54:55.510241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.647 [2024-11-27 21:54:55.510248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.647 [2024-11-27 21:54:55.510269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.647 [2024-11-27 21:54:55.510278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:32.647 [2024-11-27 21:54:55.510287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.647 [2024-11-27 21:54:55.510295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.647 [2024-11-27 21:54:55.523799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.647 [2024-11-27 21:54:55.523849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:32.647 [2024-11-27 21:54:55.523861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.647 [2024-11-27 21:54:55.523870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.647 [2024-11-27 21:54:55.534022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.647 [2024-11-27 21:54:55.534068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:32.647 [2024-11-27 21:54:55.534079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.647 [2024-11-27 21:54:55.534096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.647 [2024-11-27 21:54:55.534144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.647 [2024-11-27 21:54:55.534153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:32.647 [2024-11-27 21:54:55.534166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.647 [2024-11-27 21:54:55.534174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.647 
[2024-11-27 21:54:55.534210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.647 [2024-11-27 21:54:55.534223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:32.647 [2024-11-27 21:54:55.534231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.647 [2024-11-27 21:54:55.534240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.647 [2024-11-27 21:54:55.534313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.647 [2024-11-27 21:54:55.534323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:32.647 [2024-11-27 21:54:55.534386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.647 [2024-11-27 21:54:55.534396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.647 [2024-11-27 21:54:55.534425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.647 [2024-11-27 21:54:55.534435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:32.647 [2024-11-27 21:54:55.534446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.647 [2024-11-27 21:54:55.534454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.647 [2024-11-27 21:54:55.534497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.647 [2024-11-27 21:54:55.534507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:32.647 [2024-11-27 21:54:55.534516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.647 [2024-11-27 21:54:55.534524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.647 [2024-11-27 21:54:55.534571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:32.647 [2024-11-27 21:54:55.534584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:32.647 [2024-11-27 21:54:55.534593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:32.647 [2024-11-27 21:54:55.534608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:32.647 [2024-11-27 21:54:55.534747] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 78.090 ms, result 0 00:22:32.647 00:22:32.647 00:22:32.647 21:54:55 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:35.194 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:35.194 21:54:57 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:22:35.194 [2024-11-27 21:54:58.023353] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:22:35.194 [2024-11-27 21:54:58.023494] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89458 ] 00:22:35.194 [2024-11-27 21:54:58.171610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.194 [2024-11-27 21:54:58.192720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.194 [2024-11-27 21:54:58.295447] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:35.194 [2024-11-27 21:54:58.295522] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:35.457 [2024-11-27 21:54:58.456074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.457 [2024-11-27 21:54:58.456131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:35.457 [2024-11-27 21:54:58.456146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:35.457 [2024-11-27 21:54:58.456154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.457 [2024-11-27 21:54:58.456214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.457 [2024-11-27 21:54:58.456225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:35.457 [2024-11-27 21:54:58.456235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:22:35.457 [2024-11-27 21:54:58.456248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.457 [2024-11-27 21:54:58.456280] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:35.457 [2024-11-27 21:54:58.456594] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:35.457 [2024-11-27 21:54:58.456616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.457 [2024-11-27 21:54:58.456630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:35.457 [2024-11-27 21:54:58.456642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:22:35.457 [2024-11-27 21:54:58.456650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.457 [2024-11-27 21:54:58.458406] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:35.457 [2024-11-27 21:54:58.462243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.457 [2024-11-27 21:54:58.462297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:35.457 [2024-11-27 21:54:58.462308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.839 ms 00:22:35.457 [2024-11-27 21:54:58.462325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.457 [2024-11-27 21:54:58.462413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.457 [2024-11-27 21:54:58.462426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:35.457 [2024-11-27 21:54:58.462435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:22:35.457 [2024-11-27 21:54:58.462448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.457 [2024-11-27 21:54:58.470742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:35.457 [2024-11-27 21:54:58.470788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:35.457 [2024-11-27 21:54:58.470806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.251 ms 00:22:35.457 [2024-11-27 21:54:58.470815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.457 [2024-11-27 21:54:58.470918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.457 [2024-11-27 21:54:58.470928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:35.457 [2024-11-27 21:54:58.470937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:22:35.457 [2024-11-27 21:54:58.470947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.457 [2024-11-27 21:54:58.471007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.457 [2024-11-27 21:54:58.471018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:35.458 [2024-11-27 21:54:58.471026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:35.458 [2024-11-27 21:54:58.471038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.458 [2024-11-27 21:54:58.471062] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:35.458 [2024-11-27 21:54:58.473247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.458 [2024-11-27 21:54:58.473291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:35.458 [2024-11-27 21:54:58.473305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.191 ms 00:22:35.458 [2024-11-27 21:54:58.473324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.458 [2024-11-27 21:54:58.473378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.458 [2024-11-27 21:54:58.473387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:35.458 [2024-11-27 21:54:58.473395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:35.458 [2024-11-27 21:54:58.473407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.458 [2024-11-27 21:54:58.473430] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:35.458 [2024-11-27 21:54:58.473451] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:35.458 [2024-11-27 21:54:58.473493] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:35.458 [2024-11-27 21:54:58.473508] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:35.458 [2024-11-27 21:54:58.473615] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:35.458 [2024-11-27 21:54:58.473626] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:35.458 [2024-11-27 21:54:58.473642] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:35.458 [2024-11-27 21:54:58.473654] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:35.458 [2024-11-27 21:54:58.473664] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:35.458 [2024-11-27 21:54:58.473672] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:35.458 [2024-11-27 21:54:58.473679] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:35.458 [2024-11-27 21:54:58.473686] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:35.458 [2024-11-27 21:54:58.473697] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:35.458 [2024-11-27 21:54:58.473706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.458 [2024-11-27 21:54:58.473714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:35.458 [2024-11-27 21:54:58.473722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:22:35.458 [2024-11-27 21:54:58.473732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.458 [2024-11-27 21:54:58.473819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.458 [2024-11-27 21:54:58.473831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:35.458 [2024-11-27 21:54:58.473838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:35.458 [2024-11-27 21:54:58.473845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.458 [2024-11-27 21:54:58.473948] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:35.458 [2024-11-27 21:54:58.473960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:35.458 [2024-11-27 21:54:58.473969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:35.458 [2024-11-27 21:54:58.473979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:35.458 [2024-11-27 21:54:58.473988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:35.458 [2024-11-27 21:54:58.473995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:35.458 [2024-11-27 21:54:58.474003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:35.458 [2024-11-27 21:54:58.474012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:35.458 [2024-11-27 21:54:58.474021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:35.458 [2024-11-27 21:54:58.474028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:35.458 [2024-11-27 21:54:58.474039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:35.458 [2024-11-27 21:54:58.474047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:35.458 [2024-11-27 21:54:58.474055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:35.458 [2024-11-27 21:54:58.474063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:35.458 [2024-11-27 21:54:58.474070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:35.458 [2024-11-27 21:54:58.474077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:35.458 [2024-11-27 21:54:58.474085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:35.458 [2024-11-27 21:54:58.474093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:35.458 [2024-11-27 21:54:58.474101] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:35.458 [2024-11-27 21:54:58.474109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:35.458 [2024-11-27 21:54:58.474116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:35.458 [2024-11-27 21:54:58.474126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:35.458 [2024-11-27 21:54:58.474134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:35.458 [2024-11-27 21:54:58.474142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:35.458 [2024-11-27 21:54:58.474150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:35.458 [2024-11-27 21:54:58.474157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:35.458 [2024-11-27 21:54:58.474169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:35.458 [2024-11-27 21:54:58.474177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:35.458 [2024-11-27 21:54:58.474185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:35.458 [2024-11-27 21:54:58.474193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:35.458 [2024-11-27 21:54:58.474201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:35.458 [2024-11-27 21:54:58.474209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:35.458 [2024-11-27 21:54:58.474216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:35.458 [2024-11-27 21:54:58.474224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:35.458 [2024-11-27 21:54:58.474232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:35.458 [2024-11-27 21:54:58.474240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:35.458 [2024-11-27 21:54:58.474248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:35.458 [2024-11-27 21:54:58.474255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:35.458 [2024-11-27 21:54:58.474264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:35.458 [2024-11-27 21:54:58.474270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:35.458 [2024-11-27 21:54:58.474276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:35.458 [2024-11-27 21:54:58.474282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:35.458 [2024-11-27 21:54:58.474292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:35.458 [2024-11-27 21:54:58.474299] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:35.458 [2024-11-27 21:54:58.474309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:35.458 [2024-11-27 21:54:58.474319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:35.458 [2024-11-27 21:54:58.474329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:35.458 [2024-11-27 21:54:58.474352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:35.458 [2024-11-27 21:54:58.474359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:35.458 [2024-11-27 21:54:58.474366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:35.458 
[2024-11-27 21:54:58.474372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:35.458 [2024-11-27 21:54:58.474378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:35.458 [2024-11-27 21:54:58.474385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:35.458 [2024-11-27 21:54:58.474395] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:35.458 [2024-11-27 21:54:58.474406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:35.458 [2024-11-27 21:54:58.474415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:35.458 [2024-11-27 21:54:58.474423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:35.458 [2024-11-27 21:54:58.474432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:35.458 [2024-11-27 21:54:58.474442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:35.458 [2024-11-27 21:54:58.474450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:35.458 [2024-11-27 21:54:58.474458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:35.458 [2024-11-27 21:54:58.474476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:35.458 [2024-11-27 21:54:58.474485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:35.458 [2024-11-27 21:54:58.474492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:35.458 [2024-11-27 21:54:58.474505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:35.458 [2024-11-27 21:54:58.474512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:35.459 [2024-11-27 21:54:58.474519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:35.459 [2024-11-27 21:54:58.474527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:35.459 [2024-11-27 21:54:58.474534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:35.459 [2024-11-27 21:54:58.474541] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:35.459 [2024-11-27 21:54:58.474549] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:35.459 [2024-11-27 21:54:58.474557] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:35.459 [2024-11-27 21:54:58.474564] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:35.459 [2024-11-27 21:54:58.474571] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:35.459 [2024-11-27 21:54:58.474579] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:35.459 [2024-11-27 21:54:58.474589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.459 [2024-11-27 21:54:58.474597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:35.459 [2024-11-27 21:54:58.474608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:22:35.459 [2024-11-27 21:54:58.474618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.459 [2024-11-27 21:54:58.488444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.459 [2024-11-27 21:54:58.488487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:35.459 [2024-11-27 21:54:58.488498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.778 ms 00:22:35.459 [2024-11-27 21:54:58.488506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.459 [2024-11-27 21:54:58.488592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.459 [2024-11-27 21:54:58.488607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:35.459 [2024-11-27 21:54:58.488615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:22:35.459 [2024-11-27 21:54:58.488623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.459 [2024-11-27 21:54:58.509882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.459 [2024-11-27 21:54:58.509940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:35.459 [2024-11-27 21:54:58.509955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.202 ms 00:22:35.459 [2024-11-27 21:54:58.509964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.459 [2024-11-27 21:54:58.510022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.459 [2024-11-27 21:54:58.510034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:35.459 [2024-11-27 21:54:58.510045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:35.459 [2024-11-27 21:54:58.510063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.459 [2024-11-27 21:54:58.510628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.459 [2024-11-27 21:54:58.510670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:35.459 [2024-11-27 21:54:58.510684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.496 ms 00:22:35.459 [2024-11-27 21:54:58.510695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.459 [2024-11-27 21:54:58.510875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.459 [2024-11-27 21:54:58.510895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:35.459 [2024-11-27 21:54:58.510907] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:22:35.459 [2024-11-27 21:54:58.510923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.459 [2024-11-27 21:54:58.518191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.459 [2024-11-27 21:54:58.518397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:35.459 [2024-11-27 21:54:58.518414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.243 ms 00:22:35.459 [2024-11-27 21:54:58.518423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.459 [2024-11-27 21:54:58.521802] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:35.459 [2024-11-27 21:54:58.521957] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:35.459 [2024-11-27 21:54:58.521980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.459 [2024-11-27 21:54:58.521988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:35.459 [2024-11-27 21:54:58.521996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.459 ms 00:22:35.459 [2024-11-27 21:54:58.522003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.459 [2024-11-27 21:54:58.537424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.459 [2024-11-27 21:54:58.537465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:35.459 [2024-11-27 21:54:58.537477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.375 ms 00:22:35.459 [2024-11-27 21:54:58.537485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.459 [2024-11-27 21:54:58.540058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.459 [2024-11-27 21:54:58.540101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:35.459 [2024-11-27 21:54:58.540111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.523 ms 00:22:35.459 [2024-11-27 21:54:58.540119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.459 [2024-11-27 21:54:58.542569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.459 [2024-11-27 21:54:58.542611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:35.459 [2024-11-27 21:54:58.542621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.409 ms 00:22:35.459 [2024-11-27 21:54:58.542629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.459 [2024-11-27 21:54:58.542961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.459 [2024-11-27 21:54:58.542972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:35.459 [2024-11-27 21:54:58.542986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:22:35.459 [2024-11-27 21:54:58.542994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.459 [2024-11-27 21:54:58.564126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.459 [2024-11-27 21:54:58.564345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:35.459 [2024-11-27 21:54:58.564366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
21.108 ms 00:22:35.459 [2024-11-27 21:54:58.564382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.459 [2024-11-27 21:54:58.572181] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:35.459 [2024-11-27 21:54:58.575064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.459 [2024-11-27 21:54:58.575102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:35.459 [2024-11-27 21:54:58.575114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.637 ms 00:22:35.459 [2024-11-27 21:54:58.575129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.459 [2024-11-27 21:54:58.575203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.459 [2024-11-27 21:54:58.575219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:35.459 [2024-11-27 21:54:58.575238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:35.459 [2024-11-27 21:54:58.575246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.459 [2024-11-27 21:54:58.575316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.459 [2024-11-27 21:54:58.575329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:35.459 [2024-11-27 21:54:58.575493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:35.459 [2024-11-27 21:54:58.575516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.459 [2024-11-27 21:54:58.575556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.459 [2024-11-27 21:54:58.575578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:35.459 [2024-11-27 21:54:58.575598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:35.721 [2024-11-27 21:54:58.575617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.721 [2024-11-27 21:54:58.575661] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:35.721 [2024-11-27 21:54:58.575672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.721 [2024-11-27 21:54:58.575680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:35.721 [2024-11-27 21:54:58.575691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:35.721 [2024-11-27 21:54:58.575699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.721 [2024-11-27 21:54:58.580534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.721 [2024-11-27 21:54:58.580578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:35.721 [2024-11-27 21:54:58.580590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.813 ms 00:22:35.721 [2024-11-27 21:54:58.580598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.721 [2024-11-27 21:54:58.580678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:35.721 [2024-11-27 21:54:58.580693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:35.721 [2024-11-27 21:54:58.580708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:22:35.721 [2024-11-27 21:54:58.580718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:35.721 
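The superblock metadata dump above lists every region as blk_offs/blk_sz counted in FTL blocks, while the dump_region lines report the same regions in MiB; assuming the 4 KiB block size FTL normally uses (the constant below is an assumption, not taken from this log), the two views reconcile directly. A minimal Python sketch of the conversion, using the base-device data region from the dump above:

    # Sketch only: convert blk_offs/blk_sz from the superblock dump into the MiB
    # figures printed by dump_region, assuming a 4 KiB FTL block.
    FTL_BLOCK_SIZE = 4096  # bytes per FTL block (assumed)

    def blocks_to_mib(blocks: int) -> float:
        return blocks * FTL_BLOCK_SIZE / (1024 * 1024)

    # Region type:0x9 on the base device: blk_offs:0x40 blk_sz:0x1900000
    print(blocks_to_mib(0x40))       # 0.25     -> "Region data_btm ... offset: 0.25 MiB"
    print(blocks_to_mib(0x1900000))  # 102400.0 -> "blocks: 102400.00 MiB"

The same arithmetic reproduces the other dump_region offsets and sizes printed for this device.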
[2024-11-27 21:54:58.581819] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 125.260 ms, result 0 00:22:36.664  [2024-11-27T21:55:00.731Z] Copying: 13/1024 [MB] (13 MBps) [2024-11-27T21:55:01.674Z] Copying: 24/1024 [MB] (10 MBps) [2024-11-27T21:55:02.616Z] Copying: 42/1024 [MB] (18 MBps) [2024-11-27T21:55:04.006Z] Copying: 64/1024 [MB] (22 MBps) [2024-11-27T21:55:04.951Z] Copying: 76/1024 [MB] (11 MBps) [2024-11-27T21:55:05.897Z] Copying: 92/1024 [MB] (16 MBps) [2024-11-27T21:55:06.841Z] Copying: 112/1024 [MB] (19 MBps) [2024-11-27T21:55:07.784Z] Copying: 125/1024 [MB] (12 MBps) [2024-11-27T21:55:08.728Z] Copying: 140/1024 [MB] (15 MBps) [2024-11-27T21:55:09.688Z] Copying: 154/1024 [MB] (14 MBps) [2024-11-27T21:55:10.629Z] Copying: 169/1024 [MB] (14 MBps) [2024-11-27T21:55:12.015Z] Copying: 183/1024 [MB] (13 MBps) [2024-11-27T21:55:12.956Z] Copying: 202/1024 [MB] (19 MBps) [2024-11-27T21:55:13.900Z] Copying: 224/1024 [MB] (21 MBps) [2024-11-27T21:55:14.842Z] Copying: 245/1024 [MB] (20 MBps) [2024-11-27T21:55:15.781Z] Copying: 259/1024 [MB] (14 MBps) [2024-11-27T21:55:16.725Z] Copying: 274/1024 [MB] (14 MBps) [2024-11-27T21:55:17.665Z] Copying: 291/1024 [MB] (16 MBps) [2024-11-27T21:55:18.606Z] Copying: 303/1024 [MB] (12 MBps) [2024-11-27T21:55:20.018Z] Copying: 319/1024 [MB] (15 MBps) [2024-11-27T21:55:20.659Z] Copying: 336/1024 [MB] (16 MBps) [2024-11-27T21:55:21.603Z] Copying: 350/1024 [MB] (14 MBps) [2024-11-27T21:55:22.994Z] Copying: 366/1024 [MB] (16 MBps) [2024-11-27T21:55:23.933Z] Copying: 376/1024 [MB] (10 MBps) [2024-11-27T21:55:24.865Z] Copying: 390/1024 [MB] (14 MBps) [2024-11-27T21:55:25.798Z] Copying: 430/1024 [MB] (39 MBps) [2024-11-27T21:55:26.735Z] Copying: 465/1024 [MB] (35 MBps) [2024-11-27T21:55:27.680Z] Copying: 506/1024 [MB] (40 MBps) [2024-11-27T21:55:28.626Z] Copying: 518/1024 [MB] (12 MBps) [2024-11-27T21:55:30.006Z] Copying: 529/1024 [MB] (10 MBps) [2024-11-27T21:55:30.942Z] Copying: 542/1024 [MB] (13 MBps) [2024-11-27T21:55:31.886Z] Copying: 569/1024 [MB] (26 MBps) [2024-11-27T21:55:32.826Z] Copying: 585/1024 [MB] (15 MBps) [2024-11-27T21:55:33.760Z] Copying: 600/1024 [MB] (15 MBps) [2024-11-27T21:55:34.700Z] Copying: 637/1024 [MB] (36 MBps) [2024-11-27T21:55:35.640Z] Copying: 661/1024 [MB] (24 MBps) [2024-11-27T21:55:37.024Z] Copying: 683/1024 [MB] (21 MBps) [2024-11-27T21:55:37.607Z] Copying: 707/1024 [MB] (24 MBps) [2024-11-27T21:55:38.989Z] Copying: 719/1024 [MB] (12 MBps) [2024-11-27T21:55:39.933Z] Copying: 738/1024 [MB] (18 MBps) [2024-11-27T21:55:40.875Z] Copying: 757/1024 [MB] (19 MBps) [2024-11-27T21:55:41.810Z] Copying: 775/1024 [MB] (18 MBps) [2024-11-27T21:55:42.754Z] Copying: 817/1024 [MB] (41 MBps) [2024-11-27T21:55:43.697Z] Copying: 840/1024 [MB] (23 MBps) [2024-11-27T21:55:44.642Z] Copying: 858/1024 [MB] (17 MBps) [2024-11-27T21:55:46.031Z] Copying: 875/1024 [MB] (16 MBps) [2024-11-27T21:55:46.604Z] Copying: 893/1024 [MB] (17 MBps) [2024-11-27T21:55:47.992Z] Copying: 909/1024 [MB] (15 MBps) [2024-11-27T21:55:48.936Z] Copying: 925/1024 [MB] (16 MBps) [2024-11-27T21:55:49.874Z] Copying: 942/1024 [MB] (16 MBps) [2024-11-27T21:55:50.816Z] Copying: 959/1024 [MB] (17 MBps) [2024-11-27T21:55:51.782Z] Copying: 977/1024 [MB] (17 MBps) [2024-11-27T21:55:52.738Z] Copying: 991/1024 [MB] (14 MBps) [2024-11-27T21:55:53.672Z] Copying: 1011/1024 [MB] (20 MBps) [2024-11-27T21:55:53.932Z] Copying: 1023/1024 [MB] (11 MBps) [2024-11-27T21:55:53.932Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-27 
21:55:53.677805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.811 [2024-11-27 21:55:53.677924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:30.811 [2024-11-27 21:55:53.677979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:30.811 [2024-11-27 21:55:53.678004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.811 [2024-11-27 21:55:53.680660] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:30.811 [2024-11-27 21:55:53.681880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.811 [2024-11-27 21:55:53.681962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:30.811 [2024-11-27 21:55:53.682014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.129 ms 00:23:30.811 [2024-11-27 21:55:53.682031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.811 [2024-11-27 21:55:53.692139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.811 [2024-11-27 21:55:53.692226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:30.811 [2024-11-27 21:55:53.692271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.527 ms 00:23:30.811 [2024-11-27 21:55:53.692290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.811 [2024-11-27 21:55:53.709011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.811 [2024-11-27 21:55:53.709112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:30.811 [2024-11-27 21:55:53.709125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.698 ms 00:23:30.811 [2024-11-27 21:55:53.709136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.811 [2024-11-27 21:55:53.713967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.811 [2024-11-27 21:55:53.713991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:30.811 [2024-11-27 21:55:53.713999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.810 ms 00:23:30.811 [2024-11-27 21:55:53.714005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.811 [2024-11-27 21:55:53.715912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.811 [2024-11-27 21:55:53.715940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:30.811 [2024-11-27 21:55:53.715948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.876 ms 00:23:30.811 [2024-11-27 21:55:53.715953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.811 [2024-11-27 21:55:53.719621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.811 [2024-11-27 21:55:53.719646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:30.811 [2024-11-27 21:55:53.719654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.645 ms 00:23:30.811 [2024-11-27 21:55:53.719664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.811 [2024-11-27 21:55:53.881300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.811 [2024-11-27 21:55:53.881411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:30.811 [2024-11-27 21:55:53.881425] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 161.620 ms 00:23:30.811 [2024-11-27 21:55:53.881431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.811 [2024-11-27 21:55:53.883006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.811 [2024-11-27 21:55:53.883032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:30.811 [2024-11-27 21:55:53.883039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.560 ms 00:23:30.811 [2024-11-27 21:55:53.883045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.811 [2024-11-27 21:55:53.883976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.811 [2024-11-27 21:55:53.884002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:30.811 [2024-11-27 21:55:53.884009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.909 ms 00:23:30.811 [2024-11-27 21:55:53.884014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.811 [2024-11-27 21:55:53.885008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.811 [2024-11-27 21:55:53.885093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:30.811 [2024-11-27 21:55:53.885103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:23:30.811 [2024-11-27 21:55:53.885109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.811 [2024-11-27 21:55:53.886184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.812 [2024-11-27 21:55:53.886208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:30.812 [2024-11-27 21:55:53.886215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.036 ms 00:23:30.812 [2024-11-27 21:55:53.886220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.812 [2024-11-27 21:55:53.886240] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:30.812 [2024-11-27 21:55:53.886251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 93952 / 261120 wr_cnt: 1 state: open 00:23:30.812 [2024-11-27 21:55:53.886259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 
[2024-11-27 21:55:53.886312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 
state: free 00:23:30.812 [2024-11-27 21:55:53.886473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 
0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:30.812 [2024-11-27 21:55:53.886775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:30.813 [2024-11-27 21:55:53.886781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:30.813 [2024-11-27 21:55:53.886787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:30.813 [2024-11-27 21:55:53.886792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:30.813 [2024-11-27 21:55:53.886798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:30.813 [2024-11-27 21:55:53.886804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:30.813 [2024-11-27 21:55:53.886810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:30.813 [2024-11-27 21:55:53.886816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:30.813 [2024-11-27 21:55:53.886821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:30.813 [2024-11-27 21:55:53.886827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:30.813 [2024-11-27 21:55:53.886833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:30.813 [2024-11-27 21:55:53.886839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:30.813 [2024-11-27 21:55:53.886845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:30.813 [2024-11-27 21:55:53.886851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:30.813 [2024-11-27 21:55:53.886856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:30.813 [2024-11-27 21:55:53.886868] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:30.813 [2024-11-27 21:55:53.886875] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a55aa5c4-5ffd-42dd-9005-29be45156bb9 00:23:30.813 [2024-11-27 21:55:53.886881] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 93952 00:23:30.813 [2024-11-27 21:55:53.886888] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 94912 00:23:30.813 [2024-11-27 21:55:53.886896] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 93952 00:23:30.813 [2024-11-27 21:55:53.886902] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0102 00:23:30.813 [2024-11-27 21:55:53.886907] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:30.813 [2024-11-27 21:55:53.886916] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:30.813 [2024-11-27 21:55:53.886922] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:30.813 [2024-11-27 21:55:53.886927] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:30.813 [2024-11-27 21:55:53.886932] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:30.813 [2024-11-27 21:55:53.886938] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:23:30.813 [2024-11-27 21:55:53.886944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:30.813 [2024-11-27 21:55:53.886950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.698 ms 00:23:30.813 [2024-11-27 21:55:53.886956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.813 [2024-11-27 21:55:53.888132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.813 [2024-11-27 21:55:53.888145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:30.813 [2024-11-27 21:55:53.888152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.164 ms 00:23:30.813 [2024-11-27 21:55:53.888158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.813 [2024-11-27 21:55:53.888220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.813 [2024-11-27 21:55:53.888226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:30.813 [2024-11-27 21:55:53.888232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:23:30.813 [2024-11-27 21:55:53.888247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.813 [2024-11-27 21:55:53.892379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.813 [2024-11-27 21:55:53.892463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:30.813 [2024-11-27 21:55:53.892518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.813 [2024-11-27 21:55:53.892566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.813 [2024-11-27 21:55:53.892608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.813 [2024-11-27 21:55:53.892618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:30.813 [2024-11-27 21:55:53.892625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.813 [2024-11-27 21:55:53.892632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.813 [2024-11-27 21:55:53.892674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.813 [2024-11-27 21:55:53.892681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:30.813 [2024-11-27 21:55:53.892687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.813 [2024-11-27 21:55:53.892693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.813 [2024-11-27 21:55:53.892704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.813 [2024-11-27 21:55:53.892710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:30.813 [2024-11-27 21:55:53.892715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.813 [2024-11-27 21:55:53.892721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.813 [2024-11-27 21:55:53.900284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.813 [2024-11-27 21:55:53.900446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:30.813 [2024-11-27 21:55:53.900457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.813 [2024-11-27 21:55:53.900463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
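The ftl_dev_dump_stats block above reports total writes 94912, user writes 93952 and WAF 1.0102; the WAF figure is consistent with the usual definition of write amplification, the ratio of device writes to host writes, and the 93952 user-written LBAs match the single open band (Band 1) in the validity dump. A quick check with the counters from this run:

    # Quick check of the WAF value from the stats dump above (values are
    # specific to this run; the total/user ratio is the assumed definition).
    total_writes = 94912   # "total writes" from the dump
    user_writes = 93952    # "user writes" from the dump
    waf = total_writes / user_writes
    print(f"{waf:.4f}")    # 1.0102, matching the "WAF: 1.0102" line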
00:23:30.813 [2024-11-27 21:55:53.906393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.813 [2024-11-27 21:55:53.906425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:30.813 [2024-11-27 21:55:53.906434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.813 [2024-11-27 21:55:53.906440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.813 [2024-11-27 21:55:53.906463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.813 [2024-11-27 21:55:53.906469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:30.813 [2024-11-27 21:55:53.906476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.813 [2024-11-27 21:55:53.906482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.813 [2024-11-27 21:55:53.906514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.813 [2024-11-27 21:55:53.906521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:30.813 [2024-11-27 21:55:53.906527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.813 [2024-11-27 21:55:53.906532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.813 [2024-11-27 21:55:53.906579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.813 [2024-11-27 21:55:53.906588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:30.813 [2024-11-27 21:55:53.906594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.813 [2024-11-27 21:55:53.906603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.813 [2024-11-27 21:55:53.906628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.813 [2024-11-27 21:55:53.906635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:30.813 [2024-11-27 21:55:53.906644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.813 [2024-11-27 21:55:53.906651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.813 [2024-11-27 21:55:53.906678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.813 [2024-11-27 21:55:53.906689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:30.813 [2024-11-27 21:55:53.906695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.813 [2024-11-27 21:55:53.906701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.813 [2024-11-27 21:55:53.906733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.813 [2024-11-27 21:55:53.906741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:30.813 [2024-11-27 21:55:53.906747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.813 [2024-11-27 21:55:53.906759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.813 [2024-11-27 21:55:53.906846] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 229.915 ms, result 0 00:23:31.756 00:23:31.756 00:23:31.756 21:55:54 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:23:31.756 [2024-11-27 21:55:54.773447] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:23:31.756 [2024-11-27 21:55:54.773596] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90039 ] 00:23:32.017 [2024-11-27 21:55:54.922024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.017 [2024-11-27 21:55:54.950993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.017 [2024-11-27 21:55:55.069649] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:32.017 [2024-11-27 21:55:55.069736] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:32.280 [2024-11-27 21:55:55.231279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.280 [2024-11-27 21:55:55.231368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:32.280 [2024-11-27 21:55:55.231392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:32.280 [2024-11-27 21:55:55.231404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.280 [2024-11-27 21:55:55.231511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.280 [2024-11-27 21:55:55.231527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:32.280 [2024-11-27 21:55:55.231545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:23:32.280 [2024-11-27 21:55:55.231566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.280 [2024-11-27 21:55:55.231609] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:32.280 [2024-11-27 21:55:55.231961] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:32.280 [2024-11-27 21:55:55.231999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.280 [2024-11-27 21:55:55.232018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:32.280 [2024-11-27 21:55:55.232035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:23:32.280 [2024-11-27 21:55:55.232047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.280 [2024-11-27 21:55:55.233827] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:32.280 [2024-11-27 21:55:55.237593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.280 [2024-11-27 21:55:55.237648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:32.280 [2024-11-27 21:55:55.237663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.775 ms 00:23:32.280 [2024-11-27 21:55:55.237685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.280 [2024-11-27 21:55:55.237769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.280 [2024-11-27 21:55:55.237784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:32.280 [2024-11-27 21:55:55.237799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 
00:23:32.280 [2024-11-27 21:55:55.237818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.280 [2024-11-27 21:55:55.245743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.280 [2024-11-27 21:55:55.245797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:32.280 [2024-11-27 21:55:55.245816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.857 ms 00:23:32.280 [2024-11-27 21:55:55.245827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.280 [2024-11-27 21:55:55.245949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.280 [2024-11-27 21:55:55.245968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:32.280 [2024-11-27 21:55:55.245987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:23:32.280 [2024-11-27 21:55:55.246000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.280 [2024-11-27 21:55:55.246082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.280 [2024-11-27 21:55:55.246098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:32.280 [2024-11-27 21:55:55.246112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:32.280 [2024-11-27 21:55:55.246131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.281 [2024-11-27 21:55:55.246166] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:32.281 [2024-11-27 21:55:55.248479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.281 [2024-11-27 21:55:55.248521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:32.281 [2024-11-27 21:55:55.248535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.322 ms 00:23:32.281 [2024-11-27 21:55:55.248547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.281 [2024-11-27 21:55:55.248596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.281 [2024-11-27 21:55:55.248610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:32.281 [2024-11-27 21:55:55.248623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:23:32.281 [2024-11-27 21:55:55.248638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.281 [2024-11-27 21:55:55.248671] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:32.281 [2024-11-27 21:55:55.248706] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:32.281 [2024-11-27 21:55:55.248770] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:32.281 [2024-11-27 21:55:55.248795] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:32.281 [2024-11-27 21:55:55.248947] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:32.281 [2024-11-27 21:55:55.248965] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:32.281 [2024-11-27 21:55:55.248986] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 
0x190 bytes 00:23:32.281 [2024-11-27 21:55:55.249011] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:32.281 [2024-11-27 21:55:55.249027] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:32.281 [2024-11-27 21:55:55.249040] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:32.281 [2024-11-27 21:55:55.249054] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:32.281 [2024-11-27 21:55:55.249067] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:32.281 [2024-11-27 21:55:55.249080] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:32.281 [2024-11-27 21:55:55.249093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.281 [2024-11-27 21:55:55.249105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:32.281 [2024-11-27 21:55:55.249119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:23:32.281 [2024-11-27 21:55:55.249132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.281 [2024-11-27 21:55:55.249266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.281 [2024-11-27 21:55:55.249282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:32.281 [2024-11-27 21:55:55.249295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:23:32.281 [2024-11-27 21:55:55.249307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.281 [2024-11-27 21:55:55.249478] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:32.281 [2024-11-27 21:55:55.249497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:32.281 [2024-11-27 21:55:55.249518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:32.281 [2024-11-27 21:55:55.249532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:32.281 [2024-11-27 21:55:55.249546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:32.281 [2024-11-27 21:55:55.249558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:32.281 [2024-11-27 21:55:55.249571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:32.281 [2024-11-27 21:55:55.249584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:32.281 [2024-11-27 21:55:55.249596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:32.281 [2024-11-27 21:55:55.249607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:32.281 [2024-11-27 21:55:55.249619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:32.281 [2024-11-27 21:55:55.249631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:32.281 [2024-11-27 21:55:55.249642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:32.281 [2024-11-27 21:55:55.249654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:32.281 [2024-11-27 21:55:55.249666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:32.281 [2024-11-27 21:55:55.249678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:32.281 [2024-11-27 21:55:55.249689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region nvc_md_mirror 00:23:32.281 [2024-11-27 21:55:55.249700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:32.281 [2024-11-27 21:55:55.249717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:32.281 [2024-11-27 21:55:55.249730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:32.281 [2024-11-27 21:55:55.249741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:32.281 [2024-11-27 21:55:55.249753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:32.281 [2024-11-27 21:55:55.249764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:32.281 [2024-11-27 21:55:55.249775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:32.281 [2024-11-27 21:55:55.249786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:32.281 [2024-11-27 21:55:55.249797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:32.281 [2024-11-27 21:55:55.249808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:32.281 [2024-11-27 21:55:55.249819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:32.281 [2024-11-27 21:55:55.249829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:32.281 [2024-11-27 21:55:55.249840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:32.281 [2024-11-27 21:55:55.249852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:32.281 [2024-11-27 21:55:55.249862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:32.281 [2024-11-27 21:55:55.249874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:32.281 [2024-11-27 21:55:55.249886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:32.281 [2024-11-27 21:55:55.249900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:32.281 [2024-11-27 21:55:55.249911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:32.281 [2024-11-27 21:55:55.249922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:32.281 [2024-11-27 21:55:55.249938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:32.281 [2024-11-27 21:55:55.249951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:32.281 [2024-11-27 21:55:55.249962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:32.281 [2024-11-27 21:55:55.249974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:32.281 [2024-11-27 21:55:55.249986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:32.281 [2024-11-27 21:55:55.249997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:32.281 [2024-11-27 21:55:55.250009] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:32.281 [2024-11-27 21:55:55.250025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:32.281 [2024-11-27 21:55:55.250037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:32.281 [2024-11-27 21:55:55.250050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:32.281 [2024-11-27 21:55:55.250068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:32.281 [2024-11-27 21:55:55.250080] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:32.281 [2024-11-27 21:55:55.250091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:32.281 [2024-11-27 21:55:55.250106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:32.281 [2024-11-27 21:55:55.250118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:32.281 [2024-11-27 21:55:55.250129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:32.281 [2024-11-27 21:55:55.250143] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:32.281 [2024-11-27 21:55:55.250160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:32.281 [2024-11-27 21:55:55.250175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:32.281 [2024-11-27 21:55:55.250187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:32.281 [2024-11-27 21:55:55.250200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:32.281 [2024-11-27 21:55:55.250213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:32.281 [2024-11-27 21:55:55.250226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:32.281 [2024-11-27 21:55:55.250238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:32.281 [2024-11-27 21:55:55.250250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:32.281 [2024-11-27 21:55:55.250263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:32.281 [2024-11-27 21:55:55.250275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:32.281 [2024-11-27 21:55:55.250296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:32.281 [2024-11-27 21:55:55.250308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:32.281 [2024-11-27 21:55:55.250324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:32.281 [2024-11-27 21:55:55.250673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:32.281 [2024-11-27 21:55:55.250747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:32.282 [2024-11-27 21:55:55.250800] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:32.282 [2024-11-27 21:55:55.250853] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:32.282 [2024-11-27 21:55:55.250975] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:32.282 [2024-11-27 21:55:55.251028] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:32.282 [2024-11-27 21:55:55.251079] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:32.282 [2024-11-27 21:55:55.251129] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:32.282 [2024-11-27 21:55:55.251269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.282 [2024-11-27 21:55:55.251410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:32.282 [2024-11-27 21:55:55.251456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.883 ms 00:23:32.282 [2024-11-27 21:55:55.251501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.282 [2024-11-27 21:55:55.265424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.282 [2024-11-27 21:55:55.265593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:32.282 [2024-11-27 21:55:55.265672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.793 ms 00:23:32.282 [2024-11-27 21:55:55.265709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.282 [2024-11-27 21:55:55.265859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.282 [2024-11-27 21:55:55.265909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:32.282 [2024-11-27 21:55:55.265946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:23:32.282 [2024-11-27 21:55:55.265987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.282 [2024-11-27 21:55:55.286659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.282 [2024-11-27 21:55:55.286883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:32.282 [2024-11-27 21:55:55.286972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.478 ms 00:23:32.282 [2024-11-27 21:55:55.287013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.282 [2024-11-27 21:55:55.287096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.282 [2024-11-27 21:55:55.287156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:32.282 [2024-11-27 21:55:55.287272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:32.282 [2024-11-27 21:55:55.287312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.282 [2024-11-27 21:55:55.287942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.282 [2024-11-27 21:55:55.288093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:32.282 [2024-11-27 21:55:55.288168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.490 ms 00:23:32.282 [2024-11-27 21:55:55.288216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.282 [2024-11-27 21:55:55.288553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:32.282 [2024-11-27 21:55:55.288754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:32.282 [2024-11-27 21:55:55.288783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:23:32.282 [2024-11-27 21:55:55.288801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.282 [2024-11-27 21:55:55.296600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.282 [2024-11-27 21:55:55.296646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:32.282 [2024-11-27 21:55:55.296661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.753 ms 00:23:32.282 [2024-11-27 21:55:55.296673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.282 [2024-11-27 21:55:55.300551] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:32.282 [2024-11-27 21:55:55.300603] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:32.282 [2024-11-27 21:55:55.300626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.282 [2024-11-27 21:55:55.300638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:32.282 [2024-11-27 21:55:55.300651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.829 ms 00:23:32.282 [2024-11-27 21:55:55.300663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.282 [2024-11-27 21:55:55.316525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.282 [2024-11-27 21:55:55.316577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:32.282 [2024-11-27 21:55:55.316596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.800 ms 00:23:32.282 [2024-11-27 21:55:55.316609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.282 [2024-11-27 21:55:55.319552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.282 [2024-11-27 21:55:55.319720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:32.282 [2024-11-27 21:55:55.319742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.899 ms 00:23:32.282 [2024-11-27 21:55:55.319753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.282 [2024-11-27 21:55:55.322113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.282 [2024-11-27 21:55:55.322162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:32.282 [2024-11-27 21:55:55.322177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.310 ms 00:23:32.282 [2024-11-27 21:55:55.322187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.282 [2024-11-27 21:55:55.322656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.282 [2024-11-27 21:55:55.322695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:32.282 [2024-11-27 21:55:55.322711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:23:32.282 [2024-11-27 21:55:55.322731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.282 [2024-11-27 21:55:55.347204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.282 [2024-11-27 21:55:55.347267] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:32.282 [2024-11-27 21:55:55.347285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.424 ms 00:23:32.282 [2024-11-27 21:55:55.347297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.282 [2024-11-27 21:55:55.355587] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:32.282 [2024-11-27 21:55:55.358683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.282 [2024-11-27 21:55:55.358730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:32.282 [2024-11-27 21:55:55.358747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.292 ms 00:23:32.282 [2024-11-27 21:55:55.358759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.282 [2024-11-27 21:55:55.358862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.282 [2024-11-27 21:55:55.358880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:32.282 [2024-11-27 21:55:55.358896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:32.282 [2024-11-27 21:55:55.358919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.282 [2024-11-27 21:55:55.360701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.282 [2024-11-27 21:55:55.360756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:32.282 [2024-11-27 21:55:55.360772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.720 ms 00:23:32.282 [2024-11-27 21:55:55.360784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.282 [2024-11-27 21:55:55.360832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.282 [2024-11-27 21:55:55.360853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:32.282 [2024-11-27 21:55:55.360866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:32.282 [2024-11-27 21:55:55.360878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.282 [2024-11-27 21:55:55.360931] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:32.282 [2024-11-27 21:55:55.360948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.282 [2024-11-27 21:55:55.360969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:32.282 [2024-11-27 21:55:55.360986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:32.282 [2024-11-27 21:55:55.360999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.282 [2024-11-27 21:55:55.366906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.282 [2024-11-27 21:55:55.366964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:32.282 [2024-11-27 21:55:55.366980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.868 ms 00:23:32.282 [2024-11-27 21:55:55.366992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.282 [2024-11-27 21:55:55.367102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.282 [2024-11-27 21:55:55.367118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:32.282 [2024-11-27 21:55:55.367133] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:23:32.282 [2024-11-27 21:55:55.367149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.282 [2024-11-27 21:55:55.368527] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 136.743 ms, result 0 00:23:33.665  [2024-11-27T21:55:57.728Z] Copying: 8020/1048576 [kB] (8020 kBps) [2024-11-27T21:55:58.668Z] Copying: 18/1024 [MB] (10 MBps) [2024-11-27T21:55:59.611Z] Copying: 31/1024 [MB] (13 MBps) [2024-11-27T21:56:00.996Z] Copying: 44/1024 [MB] (12 MBps) [2024-11-27T21:56:01.564Z] Copying: 58/1024 [MB] (13 MBps) [2024-11-27T21:56:02.947Z] Copying: 79/1024 [MB] (20 MBps) [2024-11-27T21:56:03.890Z] Copying: 93/1024 [MB] (14 MBps) [2024-11-27T21:56:04.836Z] Copying: 106/1024 [MB] (12 MBps) [2024-11-27T21:56:05.776Z] Copying: 118/1024 [MB] (12 MBps) [2024-11-27T21:56:06.719Z] Copying: 129/1024 [MB] (10 MBps) [2024-11-27T21:56:07.659Z] Copying: 145/1024 [MB] (15 MBps) [2024-11-27T21:56:08.592Z] Copying: 168/1024 [MB] (23 MBps) [2024-11-27T21:56:09.972Z] Copying: 187/1024 [MB] (18 MBps) [2024-11-27T21:56:10.912Z] Copying: 211/1024 [MB] (24 MBps) [2024-11-27T21:56:11.850Z] Copying: 231/1024 [MB] (20 MBps) [2024-11-27T21:56:12.795Z] Copying: 252/1024 [MB] (20 MBps) [2024-11-27T21:56:13.737Z] Copying: 266/1024 [MB] (14 MBps) [2024-11-27T21:56:14.683Z] Copying: 283/1024 [MB] (16 MBps) [2024-11-27T21:56:15.625Z] Copying: 304/1024 [MB] (21 MBps) [2024-11-27T21:56:16.569Z] Copying: 323/1024 [MB] (19 MBps) [2024-11-27T21:56:17.955Z] Copying: 344/1024 [MB] (20 MBps) [2024-11-27T21:56:18.898Z] Copying: 361/1024 [MB] (17 MBps) [2024-11-27T21:56:19.841Z] Copying: 372/1024 [MB] (10 MBps) [2024-11-27T21:56:20.786Z] Copying: 383/1024 [MB] (10 MBps) [2024-11-27T21:56:21.727Z] Copying: 403/1024 [MB] (20 MBps) [2024-11-27T21:56:22.670Z] Copying: 415/1024 [MB] (12 MBps) [2024-11-27T21:56:23.683Z] Copying: 427/1024 [MB] (11 MBps) [2024-11-27T21:56:24.622Z] Copying: 443/1024 [MB] (15 MBps) [2024-11-27T21:56:25.569Z] Copying: 457/1024 [MB] (14 MBps) [2024-11-27T21:56:26.957Z] Copying: 471/1024 [MB] (14 MBps) [2024-11-27T21:56:27.902Z] Copying: 482/1024 [MB] (10 MBps) [2024-11-27T21:56:28.845Z] Copying: 492/1024 [MB] (10 MBps) [2024-11-27T21:56:29.785Z] Copying: 503/1024 [MB] (10 MBps) [2024-11-27T21:56:30.730Z] Copying: 521/1024 [MB] (18 MBps) [2024-11-27T21:56:31.673Z] Copying: 532/1024 [MB] (10 MBps) [2024-11-27T21:56:32.619Z] Copying: 546/1024 [MB] (13 MBps) [2024-11-27T21:56:33.563Z] Copying: 564/1024 [MB] (18 MBps) [2024-11-27T21:56:34.953Z] Copying: 575/1024 [MB] (11 MBps) [2024-11-27T21:56:35.896Z] Copying: 597/1024 [MB] (22 MBps) [2024-11-27T21:56:36.839Z] Copying: 608/1024 [MB] (10 MBps) [2024-11-27T21:56:37.780Z] Copying: 627/1024 [MB] (19 MBps) [2024-11-27T21:56:38.719Z] Copying: 645/1024 [MB] (17 MBps) [2024-11-27T21:56:39.672Z] Copying: 656/1024 [MB] (10 MBps) [2024-11-27T21:56:40.617Z] Copying: 666/1024 [MB] (10 MBps) [2024-11-27T21:56:41.561Z] Copying: 677/1024 [MB] (10 MBps) [2024-11-27T21:56:42.944Z] Copying: 687/1024 [MB] (10 MBps) [2024-11-27T21:56:43.888Z] Copying: 701/1024 [MB] (14 MBps) [2024-11-27T21:56:44.832Z] Copying: 712/1024 [MB] (10 MBps) [2024-11-27T21:56:45.784Z] Copying: 722/1024 [MB] (10 MBps) [2024-11-27T21:56:46.718Z] Copying: 733/1024 [MB] (10 MBps) [2024-11-27T21:56:47.663Z] Copying: 756/1024 [MB] (23 MBps) [2024-11-27T21:56:48.607Z] Copying: 772/1024 [MB] (16 MBps) [2024-11-27T21:56:49.992Z] Copying: 782/1024 [MB] (10 MBps) 
[2024-11-27T21:56:50.563Z] Copying: 793/1024 [MB] (10 MBps) [2024-11-27T21:56:51.949Z] Copying: 804/1024 [MB] (11 MBps) [2024-11-27T21:56:52.897Z] Copying: 817/1024 [MB] (13 MBps) [2024-11-27T21:56:53.843Z] Copying: 833/1024 [MB] (15 MBps) [2024-11-27T21:56:54.793Z] Copying: 846/1024 [MB] (13 MBps) [2024-11-27T21:56:55.811Z] Copying: 862/1024 [MB] (15 MBps) [2024-11-27T21:56:56.757Z] Copying: 872/1024 [MB] (10 MBps) [2024-11-27T21:56:57.735Z] Copying: 884/1024 [MB] (11 MBps) [2024-11-27T21:56:58.679Z] Copying: 902/1024 [MB] (17 MBps) [2024-11-27T21:56:59.623Z] Copying: 925/1024 [MB] (23 MBps) [2024-11-27T21:57:00.568Z] Copying: 942/1024 [MB] (16 MBps) [2024-11-27T21:57:01.951Z] Copying: 955/1024 [MB] (12 MBps) [2024-11-27T21:57:02.897Z] Copying: 985/1024 [MB] (30 MBps) [2024-11-27T21:57:03.843Z] Copying: 1002/1024 [MB] (17 MBps) [2024-11-27T21:57:03.843Z] Copying: 1021/1024 [MB] (18 MBps) [2024-11-27T21:57:04.105Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-27 21:57:03.877185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.984 [2024-11-27 21:57:03.877595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:40.984 [2024-11-27 21:57:03.877719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:40.984 [2024-11-27 21:57:03.877738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.984 [2024-11-27 21:57:03.877789] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:40.984 [2024-11-27 21:57:03.879839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.984 [2024-11-27 21:57:03.879890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:40.984 [2024-11-27 21:57:03.879914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.018 ms 00:24:40.984 [2024-11-27 21:57:03.879926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.984 [2024-11-27 21:57:03.880266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.984 [2024-11-27 21:57:03.880282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:40.984 [2024-11-27 21:57:03.880295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:24:40.984 [2024-11-27 21:57:03.880307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.984 [2024-11-27 21:57:03.888920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.984 [2024-11-27 21:57:03.889161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:40.984 [2024-11-27 21:57:03.889200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.565 ms 00:24:40.984 [2024-11-27 21:57:03.889214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.984 [2024-11-27 21:57:03.895700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.984 [2024-11-27 21:57:03.895836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:40.984 [2024-11-27 21:57:03.895898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.429 ms 00:24:40.984 [2024-11-27 21:57:03.895922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.984 [2024-11-27 21:57:03.898639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.984 [2024-11-27 21:57:03.898794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist NV cache metadata 00:24:40.984 [2024-11-27 21:57:03.898960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.656 ms 00:24:40.984 [2024-11-27 21:57:03.898999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.984 [2024-11-27 21:57:03.903906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.984 [2024-11-27 21:57:03.904067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:40.984 [2024-11-27 21:57:03.904135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.851 ms 00:24:40.984 [2024-11-27 21:57:03.904167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.984 [2024-11-27 21:57:04.100110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.984 [2024-11-27 21:57:04.100276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:40.984 [2024-11-27 21:57:04.100346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 195.890 ms 00:24:40.984 [2024-11-27 21:57:04.100371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:40.984 [2024-11-27 21:57:04.102841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:40.984 [2024-11-27 21:57:04.102980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:40.984 [2024-11-27 21:57:04.103032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.438 ms 00:24:40.984 [2024-11-27 21:57:04.103053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.247 [2024-11-27 21:57:04.104869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.247 [2024-11-27 21:57:04.105009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:41.248 [2024-11-27 21:57:04.105061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.755 ms 00:24:41.248 [2024-11-27 21:57:04.105082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.248 [2024-11-27 21:57:04.106676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.248 [2024-11-27 21:57:04.106817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:41.248 [2024-11-27 21:57:04.106868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.523 ms 00:24:41.248 [2024-11-27 21:57:04.106890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.248 [2024-11-27 21:57:04.108431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.248 [2024-11-27 21:57:04.108571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:41.248 [2024-11-27 21:57:04.108622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.452 ms 00:24:41.248 [2024-11-27 21:57:04.108643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.248 [2024-11-27 21:57:04.108699] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:41.248 [2024-11-27 21:57:04.108728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:24:41.248 [2024-11-27 21:57:04.108760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.108790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 
21:57:04.108859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.108892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.108920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.108949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.109004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.109038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.109066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.109096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.109156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.109189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.109218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.109247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.109328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.109374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.109404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.109433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.109485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.109515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.109543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.109572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.109600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.109628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.109697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.109726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.109756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 
00:24:41.248 [2024-11-27 21:57:04.109784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.109813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.109892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.109922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.109952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.110009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.110039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.110069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.110120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.110313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.110528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.110637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.110667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.110721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.110754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.110796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.110848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.110878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.110906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.110953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.110963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.110971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.110980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.110988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.110996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 
wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:41.248 [2024-11-27 21:57:04.111605] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:41.248 [2024-11-27 21:57:04.111626] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a55aa5c4-5ffd-42dd-9005-29be45156bb9 00:24:41.248 [2024-11-27 21:57:04.111788] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid 
LBAs: 131072 00:24:41.248 [2024-11-27 21:57:04.111834] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 38080 00:24:41.248 [2024-11-27 21:57:04.111929] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 37120 00:24:41.248 [2024-11-27 21:57:04.111942] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0259 00:24:41.248 [2024-11-27 21:57:04.111950] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:41.248 [2024-11-27 21:57:04.111959] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:41.248 [2024-11-27 21:57:04.111967] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:41.248 [2024-11-27 21:57:04.111974] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:41.248 [2024-11-27 21:57:04.111980] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:41.248 [2024-11-27 21:57:04.111990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.248 [2024-11-27 21:57:04.111999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:41.248 [2024-11-27 21:57:04.112008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.292 ms 00:24:41.248 [2024-11-27 21:57:04.112016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.248 [2024-11-27 21:57:04.114296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.248 [2024-11-27 21:57:04.114329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:41.248 [2024-11-27 21:57:04.114359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.255 ms 00:24:41.248 [2024-11-27 21:57:04.114369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.248 [2024-11-27 21:57:04.114491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:41.248 [2024-11-27 21:57:04.114506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:41.248 [2024-11-27 21:57:04.114516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:24:41.248 [2024-11-27 21:57:04.114528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.248 [2024-11-27 21:57:04.121846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.248 [2024-11-27 21:57:04.122003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:41.248 [2024-11-27 21:57:04.122020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.248 [2024-11-27 21:57:04.122028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.248 [2024-11-27 21:57:04.122091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.248 [2024-11-27 21:57:04.122101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:41.248 [2024-11-27 21:57:04.122109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.248 [2024-11-27 21:57:04.122117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.248 [2024-11-27 21:57:04.122186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.248 [2024-11-27 21:57:04.122197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:41.248 [2024-11-27 21:57:04.122205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.248 [2024-11-27 
21:57:04.122213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.248 [2024-11-27 21:57:04.122227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.248 [2024-11-27 21:57:04.122235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:41.248 [2024-11-27 21:57:04.122249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.248 [2024-11-27 21:57:04.122257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.248 [2024-11-27 21:57:04.135935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.248 [2024-11-27 21:57:04.135995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:41.248 [2024-11-27 21:57:04.136008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.248 [2024-11-27 21:57:04.136016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.248 [2024-11-27 21:57:04.147097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.248 [2024-11-27 21:57:04.147147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:41.248 [2024-11-27 21:57:04.147159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.248 [2024-11-27 21:57:04.147168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.249 [2024-11-27 21:57:04.147229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.249 [2024-11-27 21:57:04.147239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:41.249 [2024-11-27 21:57:04.147248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.249 [2024-11-27 21:57:04.147257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.249 [2024-11-27 21:57:04.147294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.249 [2024-11-27 21:57:04.147304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:41.249 [2024-11-27 21:57:04.147312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.249 [2024-11-27 21:57:04.147321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.249 [2024-11-27 21:57:04.147412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.249 [2024-11-27 21:57:04.147427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:41.249 [2024-11-27 21:57:04.147435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.249 [2024-11-27 21:57:04.147443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.249 [2024-11-27 21:57:04.147479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.249 [2024-11-27 21:57:04.147511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:41.249 [2024-11-27 21:57:04.147520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.249 [2024-11-27 21:57:04.147528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.249 [2024-11-27 21:57:04.147571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.249 [2024-11-27 21:57:04.147584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:41.249 [2024-11-27 21:57:04.147592] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.249 [2024-11-27 21:57:04.147601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.249 [2024-11-27 21:57:04.147649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:41.249 [2024-11-27 21:57:04.147660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:41.249 [2024-11-27 21:57:04.147668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:41.249 [2024-11-27 21:57:04.147676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:41.249 [2024-11-27 21:57:04.147820] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 270.613 ms, result 0 00:24:41.249 00:24:41.249 00:24:41.510 21:57:04 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:44.060 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:44.060 21:57:06 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:24:44.060 21:57:06 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:24:44.060 21:57:06 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:44.060 21:57:06 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:44.060 21:57:06 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:44.060 21:57:06 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 87936 00:24:44.060 21:57:06 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 87936 ']' 00:24:44.060 Process with pid 87936 is not found 00:24:44.060 21:57:06 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 87936 00:24:44.060 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (87936) - No such process 00:24:44.060 21:57:06 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 87936 is not found' 00:24:44.060 21:57:06 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:24:44.060 Remove shared memory files 00:24:44.060 21:57:06 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:44.060 21:57:06 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:24:44.060 21:57:06 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:24:44.060 21:57:06 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:24:44.060 21:57:06 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:44.060 21:57:06 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:24:44.060 ************************************ 00:24:44.060 END TEST ftl_restore 00:24:44.060 ************************************ 00:24:44.060 00:24:44.060 real 4m33.032s 00:24:44.060 user 4m21.387s 00:24:44.060 sys 0m11.395s 00:24:44.060 21:57:06 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:44.060 21:57:06 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:24:44.060 21:57:06 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:44.060 21:57:06 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:44.060 21:57:06 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:44.060 21:57:06 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:44.060 ************************************ 00:24:44.060 START 
TEST ftl_dirty_shutdown 00:24:44.060 ************************************ 00:24:44.060 21:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:44.060 * Looking for test storage... 00:24:44.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:44.060 21:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:44.060 21:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:24:44.060 21:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:44.060 21:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:44.060 21:57:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:44.060 21:57:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:44.060 21:57:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:44.060 21:57:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:44.060 21:57:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:44.060 21:57:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:44.060 21:57:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:44.060 21:57:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:44.060 21:57:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:44.060 21:57:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:44.060 21:57:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:44.060 21:57:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:44.060 21:57:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:44.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.061 --rc genhtml_branch_coverage=1 00:24:44.061 --rc genhtml_function_coverage=1 00:24:44.061 --rc genhtml_legend=1 00:24:44.061 --rc geninfo_all_blocks=1 00:24:44.061 --rc geninfo_unexecuted_blocks=1 00:24:44.061 00:24:44.061 ' 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:44.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.061 --rc genhtml_branch_coverage=1 00:24:44.061 --rc genhtml_function_coverage=1 00:24:44.061 --rc genhtml_legend=1 00:24:44.061 --rc geninfo_all_blocks=1 00:24:44.061 --rc geninfo_unexecuted_blocks=1 00:24:44.061 00:24:44.061 ' 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:44.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.061 --rc genhtml_branch_coverage=1 00:24:44.061 --rc genhtml_function_coverage=1 00:24:44.061 --rc genhtml_legend=1 00:24:44.061 --rc geninfo_all_blocks=1 00:24:44.061 --rc geninfo_unexecuted_blocks=1 00:24:44.061 00:24:44.061 ' 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:44.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.061 --rc genhtml_branch_coverage=1 00:24:44.061 --rc genhtml_function_coverage=1 00:24:44.061 --rc genhtml_legend=1 00:24:44.061 --rc geninfo_all_blocks=1 00:24:44.061 --rc geninfo_unexecuted_blocks=1 00:24:44.061 00:24:44.061 ' 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:24:44.061 21:57:06 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=90838 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 90838 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 90838 ']' 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:44.061 21:57:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:44.061 [2024-11-27 21:57:07.087870] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:24:44.061 [2024-11-27 21:57:07.088445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90838 ] 00:24:44.323 [2024-11-27 21:57:07.235402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.323 [2024-11-27 21:57:07.265698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.893 21:57:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:44.893 21:57:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:24:44.893 21:57:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:44.893 21:57:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:24:44.893 21:57:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:44.893 21:57:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:24:44.893 21:57:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:44.893 21:57:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:45.154 21:57:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:45.154 21:57:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:45.154 21:57:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:45.154 21:57:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:45.154 21:57:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:45.154 21:57:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:45.154 21:57:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:45.154 21:57:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:45.416 21:57:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:45.416 { 00:24:45.416 "name": "nvme0n1", 00:24:45.416 "aliases": [ 00:24:45.416 "adc50785-f035-48ce-861e-69d7f0c08876" 00:24:45.416 ], 00:24:45.416 "product_name": "NVMe disk", 00:24:45.416 "block_size": 4096, 00:24:45.416 "num_blocks": 1310720, 00:24:45.416 "uuid": "adc50785-f035-48ce-861e-69d7f0c08876", 00:24:45.416 "numa_id": -1, 00:24:45.416 "assigned_rate_limits": { 00:24:45.416 "rw_ios_per_sec": 0, 00:24:45.416 "rw_mbytes_per_sec": 0, 00:24:45.416 "r_mbytes_per_sec": 0, 00:24:45.416 "w_mbytes_per_sec": 0 00:24:45.416 }, 00:24:45.416 "claimed": true, 00:24:45.416 "claim_type": "read_many_write_one", 00:24:45.416 "zoned": false, 00:24:45.416 "supported_io_types": { 00:24:45.416 "read": true, 00:24:45.416 "write": true, 00:24:45.416 "unmap": true, 00:24:45.416 "flush": true, 00:24:45.416 "reset": true, 00:24:45.416 "nvme_admin": true, 00:24:45.416 "nvme_io": true, 00:24:45.416 "nvme_io_md": false, 00:24:45.416 "write_zeroes": true, 00:24:45.416 "zcopy": false, 00:24:45.416 "get_zone_info": false, 00:24:45.416 "zone_management": false, 00:24:45.416 "zone_append": false, 00:24:45.416 "compare": true, 00:24:45.416 "compare_and_write": false, 00:24:45.416 "abort": true, 00:24:45.416 "seek_hole": false, 00:24:45.416 "seek_data": false, 00:24:45.416 
"copy": true, 00:24:45.416 "nvme_iov_md": false 00:24:45.416 }, 00:24:45.416 "driver_specific": { 00:24:45.416 "nvme": [ 00:24:45.416 { 00:24:45.416 "pci_address": "0000:00:11.0", 00:24:45.416 "trid": { 00:24:45.416 "trtype": "PCIe", 00:24:45.416 "traddr": "0000:00:11.0" 00:24:45.416 }, 00:24:45.416 "ctrlr_data": { 00:24:45.416 "cntlid": 0, 00:24:45.416 "vendor_id": "0x1b36", 00:24:45.416 "model_number": "QEMU NVMe Ctrl", 00:24:45.416 "serial_number": "12341", 00:24:45.416 "firmware_revision": "8.0.0", 00:24:45.416 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:45.416 "oacs": { 00:24:45.416 "security": 0, 00:24:45.416 "format": 1, 00:24:45.416 "firmware": 0, 00:24:45.416 "ns_manage": 1 00:24:45.416 }, 00:24:45.416 "multi_ctrlr": false, 00:24:45.416 "ana_reporting": false 00:24:45.416 }, 00:24:45.416 "vs": { 00:24:45.416 "nvme_version": "1.4" 00:24:45.416 }, 00:24:45.416 "ns_data": { 00:24:45.416 "id": 1, 00:24:45.416 "can_share": false 00:24:45.416 } 00:24:45.416 } 00:24:45.416 ], 00:24:45.416 "mp_policy": "active_passive" 00:24:45.416 } 00:24:45.416 } 00:24:45.416 ]' 00:24:45.416 21:57:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:45.416 21:57:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:45.416 21:57:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:45.416 21:57:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:45.416 21:57:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:45.416 21:57:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:24:45.416 21:57:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:45.416 21:57:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:45.416 21:57:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:45.416 21:57:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:45.416 21:57:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:45.678 21:57:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=3b81d2ba-3c81-42c2-86cf-e81ca1cb2ba4 00:24:45.678 21:57:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:24:45.678 21:57:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3b81d2ba-3c81-42c2-86cf-e81ca1cb2ba4 00:24:45.937 21:57:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:46.196 21:57:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=4cd6e932-b230-4a4c-8502-ff9747bb094d 00:24:46.196 21:57:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 4cd6e932-b230-4a4c-8502-ff9747bb094d 00:24:46.457 21:57:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=8f11f6e6-44d8-4f6b-b1b2-36fbfc13247f 00:24:46.457 21:57:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:24:46.457 21:57:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8f11f6e6-44d8-4f6b-b1b2-36fbfc13247f 00:24:46.457 21:57:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:24:46.457 21:57:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:24:46.457 21:57:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=8f11f6e6-44d8-4f6b-b1b2-36fbfc13247f 00:24:46.457 21:57:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:24:46.457 21:57:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 8f11f6e6-44d8-4f6b-b1b2-36fbfc13247f 00:24:46.457 21:57:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=8f11f6e6-44d8-4f6b-b1b2-36fbfc13247f 00:24:46.457 21:57:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:46.457 21:57:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:46.457 21:57:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:46.457 21:57:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8f11f6e6-44d8-4f6b-b1b2-36fbfc13247f 00:24:46.718 21:57:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:46.718 { 00:24:46.718 "name": "8f11f6e6-44d8-4f6b-b1b2-36fbfc13247f", 00:24:46.718 "aliases": [ 00:24:46.718 "lvs/nvme0n1p0" 00:24:46.718 ], 00:24:46.718 "product_name": "Logical Volume", 00:24:46.718 "block_size": 4096, 00:24:46.718 "num_blocks": 26476544, 00:24:46.718 "uuid": "8f11f6e6-44d8-4f6b-b1b2-36fbfc13247f", 00:24:46.718 "assigned_rate_limits": { 00:24:46.718 "rw_ios_per_sec": 0, 00:24:46.718 "rw_mbytes_per_sec": 0, 00:24:46.718 "r_mbytes_per_sec": 0, 00:24:46.718 "w_mbytes_per_sec": 0 00:24:46.718 }, 00:24:46.718 "claimed": false, 00:24:46.718 "zoned": false, 00:24:46.718 "supported_io_types": { 00:24:46.718 "read": true, 00:24:46.718 "write": true, 00:24:46.718 "unmap": true, 00:24:46.718 "flush": false, 00:24:46.718 "reset": true, 00:24:46.718 "nvme_admin": false, 00:24:46.718 "nvme_io": false, 00:24:46.718 "nvme_io_md": false, 00:24:46.718 "write_zeroes": true, 00:24:46.718 "zcopy": false, 00:24:46.718 "get_zone_info": false, 00:24:46.718 "zone_management": false, 00:24:46.718 "zone_append": false, 00:24:46.718 "compare": false, 00:24:46.718 "compare_and_write": false, 00:24:46.718 "abort": false, 00:24:46.718 "seek_hole": true, 00:24:46.718 "seek_data": true, 00:24:46.718 "copy": false, 00:24:46.718 "nvme_iov_md": false 00:24:46.718 }, 00:24:46.718 "driver_specific": { 00:24:46.718 "lvol": { 00:24:46.718 "lvol_store_uuid": "4cd6e932-b230-4a4c-8502-ff9747bb094d", 00:24:46.718 "base_bdev": "nvme0n1", 00:24:46.718 "thin_provision": true, 00:24:46.718 "num_allocated_clusters": 0, 00:24:46.718 "snapshot": false, 00:24:46.718 "clone": false, 00:24:46.718 "esnap_clone": false 00:24:46.718 } 00:24:46.718 } 00:24:46.718 } 00:24:46.718 ]' 00:24:46.718 21:57:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:46.718 21:57:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:46.718 21:57:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:46.718 21:57:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:46.718 21:57:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:46.718 21:57:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:46.718 21:57:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:24:46.718 21:57:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:46.718 21:57:09 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:46.979 21:57:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:46.979 21:57:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:46.979 21:57:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 8f11f6e6-44d8-4f6b-b1b2-36fbfc13247f 00:24:46.979 21:57:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=8f11f6e6-44d8-4f6b-b1b2-36fbfc13247f 00:24:46.979 21:57:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:46.979 21:57:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:46.979 21:57:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:46.979 21:57:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8f11f6e6-44d8-4f6b-b1b2-36fbfc13247f 00:24:47.239 21:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:47.239 { 00:24:47.239 "name": "8f11f6e6-44d8-4f6b-b1b2-36fbfc13247f", 00:24:47.239 "aliases": [ 00:24:47.239 "lvs/nvme0n1p0" 00:24:47.239 ], 00:24:47.239 "product_name": "Logical Volume", 00:24:47.239 "block_size": 4096, 00:24:47.239 "num_blocks": 26476544, 00:24:47.239 "uuid": "8f11f6e6-44d8-4f6b-b1b2-36fbfc13247f", 00:24:47.239 "assigned_rate_limits": { 00:24:47.239 "rw_ios_per_sec": 0, 00:24:47.239 "rw_mbytes_per_sec": 0, 00:24:47.239 "r_mbytes_per_sec": 0, 00:24:47.239 "w_mbytes_per_sec": 0 00:24:47.239 }, 00:24:47.239 "claimed": false, 00:24:47.239 "zoned": false, 00:24:47.239 "supported_io_types": { 00:24:47.239 "read": true, 00:24:47.239 "write": true, 00:24:47.239 "unmap": true, 00:24:47.239 "flush": false, 00:24:47.239 "reset": true, 00:24:47.239 "nvme_admin": false, 00:24:47.239 "nvme_io": false, 00:24:47.239 "nvme_io_md": false, 00:24:47.239 "write_zeroes": true, 00:24:47.239 "zcopy": false, 00:24:47.239 "get_zone_info": false, 00:24:47.239 "zone_management": false, 00:24:47.239 "zone_append": false, 00:24:47.239 "compare": false, 00:24:47.239 "compare_and_write": false, 00:24:47.239 "abort": false, 00:24:47.239 "seek_hole": true, 00:24:47.239 "seek_data": true, 00:24:47.239 "copy": false, 00:24:47.239 "nvme_iov_md": false 00:24:47.239 }, 00:24:47.239 "driver_specific": { 00:24:47.239 "lvol": { 00:24:47.239 "lvol_store_uuid": "4cd6e932-b230-4a4c-8502-ff9747bb094d", 00:24:47.239 "base_bdev": "nvme0n1", 00:24:47.239 "thin_provision": true, 00:24:47.239 "num_allocated_clusters": 0, 00:24:47.239 "snapshot": false, 00:24:47.239 "clone": false, 00:24:47.239 "esnap_clone": false 00:24:47.239 } 00:24:47.239 } 00:24:47.239 } 00:24:47.239 ]' 00:24:47.239 21:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:47.239 21:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:47.239 21:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:47.239 21:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:47.239 21:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:47.239 21:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:47.239 21:57:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:24:47.239 21:57:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:47.498 21:57:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:24:47.498 21:57:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 8f11f6e6-44d8-4f6b-b1b2-36fbfc13247f 00:24:47.498 21:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=8f11f6e6-44d8-4f6b-b1b2-36fbfc13247f 00:24:47.498 21:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:47.498 21:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:47.498 21:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:47.498 21:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8f11f6e6-44d8-4f6b-b1b2-36fbfc13247f 00:24:47.498 21:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:47.498 { 00:24:47.498 "name": "8f11f6e6-44d8-4f6b-b1b2-36fbfc13247f", 00:24:47.498 "aliases": [ 00:24:47.498 "lvs/nvme0n1p0" 00:24:47.498 ], 00:24:47.498 "product_name": "Logical Volume", 00:24:47.498 "block_size": 4096, 00:24:47.498 "num_blocks": 26476544, 00:24:47.498 "uuid": "8f11f6e6-44d8-4f6b-b1b2-36fbfc13247f", 00:24:47.498 "assigned_rate_limits": { 00:24:47.498 "rw_ios_per_sec": 0, 00:24:47.498 "rw_mbytes_per_sec": 0, 00:24:47.498 "r_mbytes_per_sec": 0, 00:24:47.498 "w_mbytes_per_sec": 0 00:24:47.498 }, 00:24:47.498 "claimed": false, 00:24:47.498 "zoned": false, 00:24:47.498 "supported_io_types": { 00:24:47.498 "read": true, 00:24:47.498 "write": true, 00:24:47.498 "unmap": true, 00:24:47.498 "flush": false, 00:24:47.498 "reset": true, 00:24:47.498 "nvme_admin": false, 00:24:47.498 "nvme_io": false, 00:24:47.498 "nvme_io_md": false, 00:24:47.498 "write_zeroes": true, 00:24:47.498 "zcopy": false, 00:24:47.498 "get_zone_info": false, 00:24:47.498 "zone_management": false, 00:24:47.498 "zone_append": false, 00:24:47.498 "compare": false, 00:24:47.498 "compare_and_write": false, 00:24:47.498 "abort": false, 00:24:47.498 "seek_hole": true, 00:24:47.498 "seek_data": true, 00:24:47.498 "copy": false, 00:24:47.498 "nvme_iov_md": false 00:24:47.498 }, 00:24:47.498 "driver_specific": { 00:24:47.498 "lvol": { 00:24:47.498 "lvol_store_uuid": "4cd6e932-b230-4a4c-8502-ff9747bb094d", 00:24:47.498 "base_bdev": "nvme0n1", 00:24:47.498 "thin_provision": true, 00:24:47.498 "num_allocated_clusters": 0, 00:24:47.498 "snapshot": false, 00:24:47.498 "clone": false, 00:24:47.498 "esnap_clone": false 00:24:47.498 } 00:24:47.498 } 00:24:47.498 } 00:24:47.498 ]' 00:24:47.498 21:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:47.758 21:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:47.758 21:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:47.758 21:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:47.758 21:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:47.758 21:57:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:47.758 21:57:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:24:47.758 21:57:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 8f11f6e6-44d8-4f6b-b1b2-36fbfc13247f 
--l2p_dram_limit 10' 00:24:47.758 21:57:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:24:47.758 21:57:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:24:47.758 21:57:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:47.758 21:57:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8f11f6e6-44d8-4f6b-b1b2-36fbfc13247f --l2p_dram_limit 10 -c nvc0n1p0 00:24:47.758 [2024-11-27 21:57:10.814244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.758 [2024-11-27 21:57:10.814285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:47.758 [2024-11-27 21:57:10.814295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:47.758 [2024-11-27 21:57:10.814303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.758 [2024-11-27 21:57:10.814357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.758 [2024-11-27 21:57:10.814368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:47.758 [2024-11-27 21:57:10.814375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:47.758 [2024-11-27 21:57:10.814383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.758 [2024-11-27 21:57:10.814397] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:47.758 [2024-11-27 21:57:10.814601] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:47.758 [2024-11-27 21:57:10.814612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.758 [2024-11-27 21:57:10.814620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:47.758 [2024-11-27 21:57:10.814626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:24:47.758 [2024-11-27 21:57:10.814633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.758 [2024-11-27 21:57:10.814681] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d1938720-a751-43f9-bccf-05588f3ff9ab 00:24:47.758 [2024-11-27 21:57:10.815619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.758 [2024-11-27 21:57:10.815717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:47.758 [2024-11-27 21:57:10.815734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:24:47.758 [2024-11-27 21:57:10.815744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.758 [2024-11-27 21:57:10.820320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.758 [2024-11-27 21:57:10.820351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:47.758 [2024-11-27 21:57:10.820360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.534 ms 00:24:47.758 [2024-11-27 21:57:10.820367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.758 [2024-11-27 21:57:10.820427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.758 [2024-11-27 21:57:10.820434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:47.758 [2024-11-27 21:57:10.820441] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:47.758 [2024-11-27 21:57:10.820447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.758 [2024-11-27 21:57:10.820484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.758 [2024-11-27 21:57:10.820491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:47.758 [2024-11-27 21:57:10.820499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:47.758 [2024-11-27 21:57:10.820504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.758 [2024-11-27 21:57:10.820522] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:47.758 [2024-11-27 21:57:10.821756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.758 [2024-11-27 21:57:10.821780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:47.758 [2024-11-27 21:57:10.821787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.240 ms 00:24:47.758 [2024-11-27 21:57:10.821794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.758 [2024-11-27 21:57:10.821817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.758 [2024-11-27 21:57:10.821828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:47.758 [2024-11-27 21:57:10.821835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:47.758 [2024-11-27 21:57:10.821846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.758 [2024-11-27 21:57:10.821858] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:47.758 [2024-11-27 21:57:10.821965] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:47.758 [2024-11-27 21:57:10.821974] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:47.758 [2024-11-27 21:57:10.821986] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:47.758 [2024-11-27 21:57:10.821993] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:47.758 [2024-11-27 21:57:10.822005] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:47.758 [2024-11-27 21:57:10.822011] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:47.758 [2024-11-27 21:57:10.822019] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:47.758 [2024-11-27 21:57:10.822025] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:47.758 [2024-11-27 21:57:10.822031] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:47.758 [2024-11-27 21:57:10.822037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.758 [2024-11-27 21:57:10.822044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:47.758 [2024-11-27 21:57:10.822052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:24:47.758 [2024-11-27 21:57:10.822059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.758 [2024-11-27 21:57:10.822122] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.758 [2024-11-27 21:57:10.822131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:47.758 [2024-11-27 21:57:10.822136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:24:47.758 [2024-11-27 21:57:10.822145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.758 [2024-11-27 21:57:10.822216] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:47.758 [2024-11-27 21:57:10.822225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:47.758 [2024-11-27 21:57:10.822231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:47.758 [2024-11-27 21:57:10.822238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:47.758 [2024-11-27 21:57:10.822243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:47.758 [2024-11-27 21:57:10.822250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:47.758 [2024-11-27 21:57:10.822254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:47.758 [2024-11-27 21:57:10.822261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:47.758 [2024-11-27 21:57:10.822266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:47.758 [2024-11-27 21:57:10.822273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:47.758 [2024-11-27 21:57:10.822278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:47.758 [2024-11-27 21:57:10.822286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:47.758 [2024-11-27 21:57:10.822290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:47.758 [2024-11-27 21:57:10.822298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:47.758 [2024-11-27 21:57:10.822304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:47.758 [2024-11-27 21:57:10.822310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:47.758 [2024-11-27 21:57:10.822315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:47.758 [2024-11-27 21:57:10.822321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:47.758 [2024-11-27 21:57:10.822326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:47.758 [2024-11-27 21:57:10.822332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:47.758 [2024-11-27 21:57:10.822348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:47.758 [2024-11-27 21:57:10.822355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:47.758 [2024-11-27 21:57:10.822360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:47.758 [2024-11-27 21:57:10.822366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:47.758 [2024-11-27 21:57:10.822370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:47.758 [2024-11-27 21:57:10.822377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:47.758 [2024-11-27 21:57:10.822382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:47.758 [2024-11-27 21:57:10.822388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:47.758 [2024-11-27 21:57:10.822393] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:47.758 [2024-11-27 21:57:10.822401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:47.758 [2024-11-27 21:57:10.822408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:47.759 [2024-11-27 21:57:10.822415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:47.759 [2024-11-27 21:57:10.822421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:47.759 [2024-11-27 21:57:10.822428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:47.759 [2024-11-27 21:57:10.822434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:47.759 [2024-11-27 21:57:10.822444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:47.759 [2024-11-27 21:57:10.822449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:47.759 [2024-11-27 21:57:10.822457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:47.759 [2024-11-27 21:57:10.822463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:47.759 [2024-11-27 21:57:10.822470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:47.759 [2024-11-27 21:57:10.822476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:47.759 [2024-11-27 21:57:10.822483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:47.759 [2024-11-27 21:57:10.822488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:47.759 [2024-11-27 21:57:10.822495] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:47.759 [2024-11-27 21:57:10.822505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:47.759 [2024-11-27 21:57:10.822516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:47.759 [2024-11-27 21:57:10.822522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:47.759 [2024-11-27 21:57:10.822532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:47.759 [2024-11-27 21:57:10.822537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:47.759 [2024-11-27 21:57:10.822544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:47.759 [2024-11-27 21:57:10.822550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:47.759 [2024-11-27 21:57:10.822557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:47.759 [2024-11-27 21:57:10.822563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:47.759 [2024-11-27 21:57:10.822574] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:47.759 [2024-11-27 21:57:10.822581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:47.759 [2024-11-27 21:57:10.822591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:47.759 [2024-11-27 21:57:10.822597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:47.759 [2024-11-27 21:57:10.822605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:47.759 [2024-11-27 21:57:10.822611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:47.759 [2024-11-27 21:57:10.822619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:47.759 [2024-11-27 21:57:10.822625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:47.759 [2024-11-27 21:57:10.822634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:47.759 [2024-11-27 21:57:10.822641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:47.759 [2024-11-27 21:57:10.822648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:47.759 [2024-11-27 21:57:10.822654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:47.759 [2024-11-27 21:57:10.822662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:47.759 [2024-11-27 21:57:10.822668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:47.759 [2024-11-27 21:57:10.822676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:47.759 [2024-11-27 21:57:10.822682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:47.759 [2024-11-27 21:57:10.822689] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:47.759 [2024-11-27 21:57:10.822696] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:47.759 [2024-11-27 21:57:10.822704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:47.759 [2024-11-27 21:57:10.822711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:47.759 [2024-11-27 21:57:10.822718] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:47.759 [2024-11-27 21:57:10.822725] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:47.759 [2024-11-27 21:57:10.822733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.759 [2024-11-27 21:57:10.822739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:47.759 [2024-11-27 21:57:10.822748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:24:47.759 [2024-11-27 21:57:10.822755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.759 [2024-11-27 21:57:10.822786] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:24:47.759 [2024-11-27 21:57:10.822793] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:51.961 [2024-11-27 21:57:14.278117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.961 [2024-11-27 21:57:14.278194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:51.961 [2024-11-27 21:57:14.278215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3455.311 ms 00:24:51.961 [2024-11-27 21:57:14.278226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.961 [2024-11-27 21:57:14.292477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.961 [2024-11-27 21:57:14.292523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:51.961 [2024-11-27 21:57:14.292540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.100 ms 00:24:51.961 [2024-11-27 21:57:14.292551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.961 [2024-11-27 21:57:14.292691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.961 [2024-11-27 21:57:14.292702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:51.961 [2024-11-27 21:57:14.292715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:24:51.961 [2024-11-27 21:57:14.292724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.961 [2024-11-27 21:57:14.305019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.961 [2024-11-27 21:57:14.305066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:51.961 [2024-11-27 21:57:14.305080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.227 ms 00:24:51.961 [2024-11-27 21:57:14.305090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.961 [2024-11-27 21:57:14.305151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.961 [2024-11-27 21:57:14.305160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:51.961 [2024-11-27 21:57:14.305172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:51.961 [2024-11-27 21:57:14.305180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.961 [2024-11-27 21:57:14.305725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.961 [2024-11-27 21:57:14.305748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:51.961 [2024-11-27 21:57:14.305762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.490 ms 00:24:51.961 [2024-11-27 21:57:14.305771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.961 [2024-11-27 21:57:14.305906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.961 [2024-11-27 21:57:14.305917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:51.961 [2024-11-27 21:57:14.305929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:24:51.961 [2024-11-27 21:57:14.305938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.961 [2024-11-27 21:57:14.314473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.961 [2024-11-27 21:57:14.314664] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:51.961 [2024-11-27 21:57:14.314687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.505 ms 00:24:51.961 [2024-11-27 21:57:14.314696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.961 [2024-11-27 21:57:14.334739] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:51.961 [2024-11-27 21:57:14.339157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.961 [2024-11-27 21:57:14.339217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:51.961 [2024-11-27 21:57:14.339235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.373 ms 00:24:51.961 [2024-11-27 21:57:14.339249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.961 [2024-11-27 21:57:14.422555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.961 [2024-11-27 21:57:14.422630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:51.961 [2024-11-27 21:57:14.422645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.244 ms 00:24:51.961 [2024-11-27 21:57:14.422659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.961 [2024-11-27 21:57:14.422882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.961 [2024-11-27 21:57:14.422901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:51.961 [2024-11-27 21:57:14.422910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:24:51.961 [2024-11-27 21:57:14.422921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.961 [2024-11-27 21:57:14.429132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.961 [2024-11-27 21:57:14.429373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:51.961 [2024-11-27 21:57:14.429402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.186 ms 00:24:51.961 [2024-11-27 21:57:14.429414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.961 [2024-11-27 21:57:14.434720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.961 [2024-11-27 21:57:14.434776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:51.961 [2024-11-27 21:57:14.434788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.260 ms 00:24:51.961 [2024-11-27 21:57:14.434797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.961 [2024-11-27 21:57:14.435146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.961 [2024-11-27 21:57:14.435160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:51.961 [2024-11-27 21:57:14.435169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:24:51.961 [2024-11-27 21:57:14.435182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.961 [2024-11-27 21:57:14.474706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.961 [2024-11-27 21:57:14.474770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:51.961 [2024-11-27 21:57:14.474786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.500 ms 00:24:51.961 [2024-11-27 21:57:14.474796] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.961 [2024-11-27 21:57:14.481914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.961 [2024-11-27 21:57:14.481974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:51.961 [2024-11-27 21:57:14.481985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.040 ms 00:24:51.961 [2024-11-27 21:57:14.481997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.961 [2024-11-27 21:57:14.487860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.961 [2024-11-27 21:57:14.487914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:51.961 [2024-11-27 21:57:14.487924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.813 ms 00:24:51.961 [2024-11-27 21:57:14.487934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.961 [2024-11-27 21:57:14.494263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.961 [2024-11-27 21:57:14.494324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:51.961 [2024-11-27 21:57:14.494355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.281 ms 00:24:51.961 [2024-11-27 21:57:14.494369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.961 [2024-11-27 21:57:14.494424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.961 [2024-11-27 21:57:14.494436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:51.961 [2024-11-27 21:57:14.494446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:51.961 [2024-11-27 21:57:14.494457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.961 [2024-11-27 21:57:14.494534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.961 [2024-11-27 21:57:14.494547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:51.961 [2024-11-27 21:57:14.494555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:51.961 [2024-11-27 21:57:14.494568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.961 [2024-11-27 21:57:14.496028] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3681.257 ms, result 0 00:24:51.961 { 00:24:51.961 "name": "ftl0", 00:24:51.961 "uuid": "d1938720-a751-43f9-bccf-05588f3ff9ab" 00:24:51.961 } 00:24:51.961 21:57:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:24:51.961 21:57:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:51.961 21:57:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:24:51.961 21:57:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:24:51.961 21:57:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:24:51.961 /dev/nbd0 00:24:51.961 21:57:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:24:51.961 21:57:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:51.961 21:57:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:24:51.961 21:57:14 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:51.961 21:57:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:51.961 21:57:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:51.962 21:57:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:24:51.962 21:57:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:51.962 21:57:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:51.962 21:57:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:24:51.962 1+0 records in 00:24:51.962 1+0 records out 00:24:51.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336468 s, 12.2 MB/s 00:24:51.962 21:57:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:51.962 21:57:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:24:51.962 21:57:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:51.962 21:57:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:51.962 21:57:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:24:51.962 21:57:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:24:51.962 [2024-11-27 21:57:15.070548] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:24:51.962 [2024-11-27 21:57:15.070873] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90980 ] 00:24:52.223 [2024-11-27 21:57:15.223860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.223 [2024-11-27 21:57:15.252637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:53.613  [2024-11-27T21:57:17.681Z] Copying: 188/1024 [MB] (188 MBps) [2024-11-27T21:57:18.621Z] Copying: 377/1024 [MB] (188 MBps) [2024-11-27T21:57:19.556Z] Copying: 589/1024 [MB] (211 MBps) [2024-11-27T21:57:20.122Z] Copying: 849/1024 [MB] (260 MBps) [2024-11-27T21:57:20.382Z] Copying: 1024/1024 [MB] (average 218 MBps) 00:24:57.261 00:24:57.261 21:57:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:59.176 21:57:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:24:59.176 [2024-11-27 21:57:22.241929] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:24:59.176 [2024-11-27 21:57:22.242523] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91057 ] 00:24:59.433 [2024-11-27 21:57:22.379970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.433 [2024-11-27 21:57:22.396752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:00.367  [2024-11-27T21:57:24.861Z] Copying: 36/1024 [MB] (36 MBps) [2024-11-27T21:57:25.794Z] Copying: 62/1024 [MB] (26 MBps) [2024-11-27T21:57:26.729Z] Copying: 87/1024 [MB] (25 MBps) [2024-11-27T21:57:27.747Z] Copying: 114/1024 [MB] (26 MBps) [2024-11-27T21:57:28.682Z] Copying: 134/1024 [MB] (19 MBps) [2024-11-27T21:57:29.615Z] Copying: 153/1024 [MB] (19 MBps) [2024-11-27T21:57:30.550Z] Copying: 179/1024 [MB] (25 MBps) [2024-11-27T21:57:31.484Z] Copying: 206/1024 [MB] (26 MBps) [2024-11-27T21:57:32.858Z] Copying: 238/1024 [MB] (31 MBps) [2024-11-27T21:57:33.793Z] Copying: 262/1024 [MB] (23 MBps) [2024-11-27T21:57:34.727Z] Copying: 285/1024 [MB] (23 MBps) [2024-11-27T21:57:35.659Z] Copying: 317/1024 [MB] (31 MBps) [2024-11-27T21:57:36.590Z] Copying: 349/1024 [MB] (32 MBps) [2024-11-27T21:57:37.524Z] Copying: 379/1024 [MB] (29 MBps) [2024-11-27T21:57:38.457Z] Copying: 411/1024 [MB] (32 MBps) [2024-11-27T21:57:39.827Z] Copying: 442/1024 [MB] (31 MBps) [2024-11-27T21:57:40.761Z] Copying: 472/1024 [MB] (30 MBps) [2024-11-27T21:57:41.695Z] Copying: 508/1024 [MB] (36 MBps) [2024-11-27T21:57:42.628Z] Copying: 537/1024 [MB] (29 MBps) [2024-11-27T21:57:43.561Z] Copying: 571/1024 [MB] (33 MBps) [2024-11-27T21:57:44.494Z] Copying: 601/1024 [MB] (30 MBps) [2024-11-27T21:57:45.866Z] Copying: 630/1024 [MB] (29 MBps) [2024-11-27T21:57:46.803Z] Copying: 660/1024 [MB] (30 MBps) [2024-11-27T21:57:47.737Z] Copying: 692/1024 [MB] (31 MBps) [2024-11-27T21:57:48.670Z] Copying: 727/1024 [MB] (34 MBps) [2024-11-27T21:57:49.603Z] Copying: 759/1024 [MB] (32 MBps) [2024-11-27T21:57:50.547Z] Copying: 789/1024 [MB] (30 MBps) [2024-11-27T21:57:51.484Z] Copying: 806/1024 [MB] (16 MBps) [2024-11-27T21:57:52.863Z] Copying: 823/1024 [MB] (17 MBps) [2024-11-27T21:57:53.803Z] Copying: 837/1024 [MB] (13 MBps) [2024-11-27T21:57:54.741Z] Copying: 852/1024 [MB] (14 MBps) [2024-11-27T21:57:55.731Z] Copying: 871/1024 [MB] (19 MBps) [2024-11-27T21:57:56.674Z] Copying: 898/1024 [MB] (26 MBps) [2024-11-27T21:57:57.613Z] Copying: 916/1024 [MB] (18 MBps) [2024-11-27T21:57:58.553Z] Copying: 936/1024 [MB] (19 MBps) [2024-11-27T21:57:59.493Z] Copying: 951/1024 [MB] (15 MBps) [2024-11-27T21:58:00.872Z] Copying: 968/1024 [MB] (16 MBps) [2024-11-27T21:58:01.806Z] Copying: 987/1024 [MB] (18 MBps) [2024-11-27T21:58:01.806Z] Copying: 1011/1024 [MB] (23 MBps) [2024-11-27T21:58:02.063Z] Copying: 1024/1024 [MB] (average 26 MBps) 00:25:38.942 00:25:38.942 21:58:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:25:38.942 21:58:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:25:39.200 21:58:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:39.460 [2024-11-27 21:58:02.357976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.460 [2024-11-27 21:58:02.358013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit 
core IO channel 00:25:39.460 [2024-11-27 21:58:02.358024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:39.460 [2024-11-27 21:58:02.358031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.460 [2024-11-27 21:58:02.358051] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:39.460 [2024-11-27 21:58:02.358461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.460 [2024-11-27 21:58:02.358482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:39.460 [2024-11-27 21:58:02.358517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:25:39.460 [2024-11-27 21:58:02.358524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.460 [2024-11-27 21:58:02.360406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.460 [2024-11-27 21:58:02.360434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:39.460 [2024-11-27 21:58:02.360442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.865 ms 00:25:39.460 [2024-11-27 21:58:02.360449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.460 [2024-11-27 21:58:02.373634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.460 [2024-11-27 21:58:02.373748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:39.460 [2024-11-27 21:58:02.373764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.172 ms 00:25:39.460 [2024-11-27 21:58:02.373772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.460 [2024-11-27 21:58:02.378589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.460 [2024-11-27 21:58:02.378615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:39.460 [2024-11-27 21:58:02.378623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.791 ms 00:25:39.460 [2024-11-27 21:58:02.378631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.460 [2024-11-27 21:58:02.379536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.460 [2024-11-27 21:58:02.379567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:39.460 [2024-11-27 21:58:02.379574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.853 ms 00:25:39.460 [2024-11-27 21:58:02.379581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.460 [2024-11-27 21:58:02.383546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.460 [2024-11-27 21:58:02.383577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:39.460 [2024-11-27 21:58:02.383585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.940 ms 00:25:39.460 [2024-11-27 21:58:02.383594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.460 [2024-11-27 21:58:02.383686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.460 [2024-11-27 21:58:02.383695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:39.460 [2024-11-27 21:58:02.383702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:25:39.460 [2024-11-27 21:58:02.383713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.460 
[2024-11-27 21:58:02.385355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.460 [2024-11-27 21:58:02.385381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:39.460 [2024-11-27 21:58:02.385389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.629 ms 00:25:39.460 [2024-11-27 21:58:02.385396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.460 [2024-11-27 21:58:02.386411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.460 [2024-11-27 21:58:02.386439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:39.460 [2024-11-27 21:58:02.386446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.990 ms 00:25:39.460 [2024-11-27 21:58:02.386452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.460 [2024-11-27 21:58:02.387258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.460 [2024-11-27 21:58:02.387288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:39.460 [2024-11-27 21:58:02.387294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.782 ms 00:25:39.460 [2024-11-27 21:58:02.387301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.460 [2024-11-27 21:58:02.388265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.460 [2024-11-27 21:58:02.388295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:39.460 [2024-11-27 21:58:02.388302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.909 ms 00:25:39.460 [2024-11-27 21:58:02.388309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.460 [2024-11-27 21:58:02.388332] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:39.460 [2024-11-27 21:58:02.388361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:39.460 [2024-11-27 21:58:02.388369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:39.460 [2024-11-27 21:58:02.388377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:39.460 [2024-11-27 21:58:02.388383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:39.460 [2024-11-27 21:58:02.388402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:39.460 [2024-11-27 21:58:02.388408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:39.460 [2024-11-27 21:58:02.388416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:39.460 [2024-11-27 21:58:02.388423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:39.460 [2024-11-27 21:58:02.388430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:39.460 [2024-11-27 21:58:02.388436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:39.460 [2024-11-27 21:58:02.388443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:39.460 [2024-11-27 21:58:02.388449] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:39.460 [2024-11-27 21:58:02.388456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388614] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 
21:58:02.388785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:25:39.461 [2024-11-27 21:58:02.388946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.388998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.389004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.389012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.389018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.389025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.389030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:39.461 [2024-11-27 21:58:02.389044] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:39.461 [2024-11-27 21:58:02.389058] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1938720-a751-43f9-bccf-05588f3ff9ab 00:25:39.461 [2024-11-27 21:58:02.389066] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:39.462 [2024-11-27 21:58:02.389071] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:39.462 [2024-11-27 21:58:02.389078] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:39.462 [2024-11-27 21:58:02.389084] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:39.462 [2024-11-27 21:58:02.389092] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:39.462 [2024-11-27 21:58:02.389098] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:39.462 [2024-11-27 21:58:02.389105] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:39.462 [2024-11-27 21:58:02.389110] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:39.462 [2024-11-27 21:58:02.389120] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:39.462 [2024-11-27 21:58:02.389125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.462 [2024-11-27 21:58:02.389132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:39.462 [2024-11-27 21:58:02.389140] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.794 ms 00:25:39.462 [2024-11-27 21:58:02.389146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.462 [2024-11-27 21:58:02.390400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.462 [2024-11-27 21:58:02.390421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:39.462 [2024-11-27 21:58:02.390431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.241 ms 00:25:39.462 [2024-11-27 21:58:02.390438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.462 [2024-11-27 21:58:02.390517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:39.462 [2024-11-27 21:58:02.390526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:39.462 [2024-11-27 21:58:02.390533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:39.462 [2024-11-27 21:58:02.390540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.462 [2024-11-27 21:58:02.395024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:39.462 [2024-11-27 21:58:02.395115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:39.462 [2024-11-27 21:58:02.395159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:39.462 [2024-11-27 21:58:02.395179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.462 [2024-11-27 21:58:02.395235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:39.462 [2024-11-27 21:58:02.395365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:39.462 [2024-11-27 21:58:02.395388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:39.462 [2024-11-27 21:58:02.395404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.462 [2024-11-27 21:58:02.395458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:39.462 [2024-11-27 21:58:02.395517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:39.462 [2024-11-27 21:58:02.395534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:39.462 [2024-11-27 21:58:02.395550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.462 [2024-11-27 21:58:02.395573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:39.462 [2024-11-27 21:58:02.395590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:39.462 [2024-11-27 21:58:02.395689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:39.462 [2024-11-27 21:58:02.395712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.462 [2024-11-27 21:58:02.403525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:39.462 [2024-11-27 21:58:02.403643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:39.462 [2024-11-27 21:58:02.403685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:39.462 [2024-11-27 21:58:02.403704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.462 [2024-11-27 21:58:02.410110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:39.462 [2024-11-27 21:58:02.410221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize metadata 00:25:39.462 [2024-11-27 21:58:02.410263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:39.462 [2024-11-27 21:58:02.410283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.462 [2024-11-27 21:58:02.410429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:39.462 [2024-11-27 21:58:02.410461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:39.462 [2024-11-27 21:58:02.410507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:39.462 [2024-11-27 21:58:02.410529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.462 [2024-11-27 21:58:02.410570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:39.462 [2024-11-27 21:58:02.410612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:39.462 [2024-11-27 21:58:02.410630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:39.462 [2024-11-27 21:58:02.410683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.462 [2024-11-27 21:58:02.410748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:39.462 [2024-11-27 21:58:02.410807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:39.462 [2024-11-27 21:58:02.410825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:39.462 [2024-11-27 21:58:02.410841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.462 [2024-11-27 21:58:02.410904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:39.462 [2024-11-27 21:58:02.410953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:39.462 [2024-11-27 21:58:02.410991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:39.462 [2024-11-27 21:58:02.411010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.462 [2024-11-27 21:58:02.411052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:39.462 [2024-11-27 21:58:02.411100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:39.462 [2024-11-27 21:58:02.411117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:39.462 [2024-11-27 21:58:02.411134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.462 [2024-11-27 21:58:02.411204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:39.462 [2024-11-27 21:58:02.411253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:39.462 [2024-11-27 21:58:02.411287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:39.462 [2024-11-27 21:58:02.411307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:39.462 [2024-11-27 21:58:02.411436] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.433 ms, result 0 00:25:39.462 true 00:25:39.462 21:58:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 90838 00:25:39.462 21:58:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid90838 00:25:39.462 21:58:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:25:39.462 [2024-11-27 21:58:02.498805] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:25:39.462 [2024-11-27 21:58:02.498929] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91476 ] 00:25:39.723 [2024-11-27 21:58:02.644647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.723 [2024-11-27 21:58:02.666915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.664  [2024-11-27T21:58:05.158Z] Copying: 202/1024 [MB] (202 MBps) [2024-11-27T21:58:06.091Z] Copying: 463/1024 [MB] (260 MBps) [2024-11-27T21:58:07.024Z] Copying: 722/1024 [MB] (258 MBps) [2024-11-27T21:58:07.024Z] Copying: 976/1024 [MB] (254 MBps) [2024-11-27T21:58:07.282Z] Copying: 1024/1024 [MB] (average 244 MBps) 00:25:44.161 00:25:44.161 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 90838 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:25:44.161 21:58:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:44.161 [2024-11-27 21:58:07.105012] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:25:44.161 [2024-11-27 21:58:07.105551] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91530 ] 00:25:44.161 [2024-11-27 21:58:07.247289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.161 [2024-11-27 21:58:07.266104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.418 [2024-11-27 21:58:07.349305] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:44.418 [2024-11-27 21:58:07.349376] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:44.418 [2024-11-27 21:58:07.410938] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:25:44.418 [2024-11-27 21:58:07.411363] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:25:44.418 [2024-11-27 21:58:07.411584] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:25:44.676 [2024-11-27 21:58:07.589937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.676 [2024-11-27 21:58:07.590065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:44.676 [2024-11-27 21:58:07.590086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:44.676 [2024-11-27 21:58:07.590105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.676 [2024-11-27 21:58:07.590165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.676 [2024-11-27 21:58:07.590177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:44.676 [2024-11-27 21:58:07.590188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:44.676 [2024-11-27 21:58:07.590197] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:44.676 [2024-11-27 21:58:07.590228] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:44.676 [2024-11-27 21:58:07.590528] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:44.676 [2024-11-27 21:58:07.590549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.676 [2024-11-27 21:58:07.590559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:44.676 [2024-11-27 21:58:07.590571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:25:44.676 [2024-11-27 21:58:07.590584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.676 [2024-11-27 21:58:07.591602] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:44.676 [2024-11-27 21:58:07.593681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.676 [2024-11-27 21:58:07.593711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:44.676 [2024-11-27 21:58:07.593722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.081 ms 00:25:44.676 [2024-11-27 21:58:07.593731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.676 [2024-11-27 21:58:07.593786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.676 [2024-11-27 21:58:07.593797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:44.676 [2024-11-27 21:58:07.593808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:25:44.676 [2024-11-27 21:58:07.593819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.676 [2024-11-27 21:58:07.598246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.676 [2024-11-27 21:58:07.598277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:44.676 [2024-11-27 21:58:07.598288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.369 ms 00:25:44.676 [2024-11-27 21:58:07.598296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.676 [2024-11-27 21:58:07.598401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.676 [2024-11-27 21:58:07.598413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:44.676 [2024-11-27 21:58:07.598424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:25:44.676 [2024-11-27 21:58:07.598435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.676 [2024-11-27 21:58:07.598481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.676 [2024-11-27 21:58:07.598495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:44.676 [2024-11-27 21:58:07.598505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:44.676 [2024-11-27 21:58:07.598514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.676 [2024-11-27 21:58:07.598538] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:44.676 [2024-11-27 21:58:07.599770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.676 [2024-11-27 21:58:07.599795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:44.676 
[2024-11-27 21:58:07.599806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.237 ms 00:25:44.676 [2024-11-27 21:58:07.599816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.676 [2024-11-27 21:58:07.599848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.676 [2024-11-27 21:58:07.599858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:44.676 [2024-11-27 21:58:07.599868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:44.676 [2024-11-27 21:58:07.599882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.676 [2024-11-27 21:58:07.599907] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:44.676 [2024-11-27 21:58:07.599927] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:44.676 [2024-11-27 21:58:07.599972] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:44.676 [2024-11-27 21:58:07.599995] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:44.676 [2024-11-27 21:58:07.600104] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:44.676 [2024-11-27 21:58:07.600116] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:44.676 [2024-11-27 21:58:07.600129] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:44.676 [2024-11-27 21:58:07.600142] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:44.676 [2024-11-27 21:58:07.600152] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:44.676 [2024-11-27 21:58:07.600162] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:44.676 [2024-11-27 21:58:07.600171] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:44.676 [2024-11-27 21:58:07.600180] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:44.676 [2024-11-27 21:58:07.600192] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:44.676 [2024-11-27 21:58:07.600202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.676 [2024-11-27 21:58:07.600211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:44.676 [2024-11-27 21:58:07.600220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:25:44.676 [2024-11-27 21:58:07.600229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.676 [2024-11-27 21:58:07.600319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.676 [2024-11-27 21:58:07.600332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:44.676 [2024-11-27 21:58:07.600370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:25:44.676 [2024-11-27 21:58:07.600379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.676 [2024-11-27 21:58:07.600507] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:44.676 [2024-11-27 21:58:07.600526] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:44.676 [2024-11-27 21:58:07.600542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:44.676 [2024-11-27 21:58:07.600553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:44.676 [2024-11-27 21:58:07.600564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:44.676 [2024-11-27 21:58:07.600576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:44.676 [2024-11-27 21:58:07.600586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:44.676 [2024-11-27 21:58:07.600598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:44.676 [2024-11-27 21:58:07.600608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:44.676 [2024-11-27 21:58:07.600619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:44.676 [2024-11-27 21:58:07.600629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:44.676 [2024-11-27 21:58:07.600639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:44.676 [2024-11-27 21:58:07.600649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:44.676 [2024-11-27 21:58:07.600663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:44.677 [2024-11-27 21:58:07.600674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:44.677 [2024-11-27 21:58:07.600684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:44.677 [2024-11-27 21:58:07.600694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:44.677 [2024-11-27 21:58:07.600705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:44.677 [2024-11-27 21:58:07.600715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:44.677 [2024-11-27 21:58:07.600724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:44.677 [2024-11-27 21:58:07.600735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:44.677 [2024-11-27 21:58:07.600746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:44.677 [2024-11-27 21:58:07.600755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:44.677 [2024-11-27 21:58:07.600765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:44.677 [2024-11-27 21:58:07.600775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:44.677 [2024-11-27 21:58:07.600785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:44.677 [2024-11-27 21:58:07.600795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:44.677 [2024-11-27 21:58:07.600805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:44.677 [2024-11-27 21:58:07.600815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:44.677 [2024-11-27 21:58:07.600829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:44.677 [2024-11-27 21:58:07.600840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:44.677 [2024-11-27 21:58:07.600851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:44.677 [2024-11-27 21:58:07.600860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:44.677 [2024-11-27 21:58:07.600871] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:44.677 [2024-11-27 21:58:07.600880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:44.677 [2024-11-27 21:58:07.600890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:44.677 [2024-11-27 21:58:07.600899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:44.677 [2024-11-27 21:58:07.600909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:44.677 [2024-11-27 21:58:07.600918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:44.677 [2024-11-27 21:58:07.600932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:44.677 [2024-11-27 21:58:07.600943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:44.677 [2024-11-27 21:58:07.600952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:44.677 [2024-11-27 21:58:07.600961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:44.677 [2024-11-27 21:58:07.600969] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:44.677 [2024-11-27 21:58:07.600979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:44.677 [2024-11-27 21:58:07.600990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:44.677 [2024-11-27 21:58:07.600999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:44.677 [2024-11-27 21:58:07.601008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:44.677 [2024-11-27 21:58:07.601017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:44.677 [2024-11-27 21:58:07.601026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:44.677 [2024-11-27 21:58:07.601034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:44.677 [2024-11-27 21:58:07.601052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:44.677 [2024-11-27 21:58:07.601061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:44.677 [2024-11-27 21:58:07.601071] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:44.677 [2024-11-27 21:58:07.601083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:44.677 [2024-11-27 21:58:07.601094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:44.677 [2024-11-27 21:58:07.601104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:44.677 [2024-11-27 21:58:07.601113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:44.677 [2024-11-27 21:58:07.601122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:44.677 [2024-11-27 21:58:07.601132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:44.677 [2024-11-27 21:58:07.601146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc 
ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:44.677 [2024-11-27 21:58:07.601158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:44.677 [2024-11-27 21:58:07.601167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:44.677 [2024-11-27 21:58:07.601177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:44.677 [2024-11-27 21:58:07.601185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:44.677 [2024-11-27 21:58:07.601195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:44.677 [2024-11-27 21:58:07.601204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:44.677 [2024-11-27 21:58:07.601213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:44.677 [2024-11-27 21:58:07.601223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:44.677 [2024-11-27 21:58:07.601232] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:44.677 [2024-11-27 21:58:07.601248] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:44.677 [2024-11-27 21:58:07.601273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:44.677 [2024-11-27 21:58:07.601283] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:44.677 [2024-11-27 21:58:07.601293] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:44.677 [2024-11-27 21:58:07.601302] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:44.677 [2024-11-27 21:58:07.601312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.677 [2024-11-27 21:58:07.601322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:44.677 [2024-11-27 21:58:07.601346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.893 ms 00:25:44.677 [2024-11-27 21:58:07.601356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.677 [2024-11-27 21:58:07.609346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.677 [2024-11-27 21:58:07.609376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:44.677 [2024-11-27 21:58:07.609389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.931 ms 00:25:44.677 [2024-11-27 21:58:07.609398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.677 [2024-11-27 21:58:07.609477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.677 [2024-11-27 21:58:07.609491] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:44.677 [2024-11-27 21:58:07.609500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:25:44.677 [2024-11-27 21:58:07.609509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.677 [2024-11-27 21:58:07.624952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.677 [2024-11-27 21:58:07.624993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:44.677 [2024-11-27 21:58:07.625009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.400 ms 00:25:44.677 [2024-11-27 21:58:07.625021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.677 [2024-11-27 21:58:07.625081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.677 [2024-11-27 21:58:07.625097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:44.677 [2024-11-27 21:58:07.625110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:44.677 [2024-11-27 21:58:07.625121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.677 [2024-11-27 21:58:07.625531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.677 [2024-11-27 21:58:07.625556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:44.677 [2024-11-27 21:58:07.625569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:25:44.677 [2024-11-27 21:58:07.625581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.677 [2024-11-27 21:58:07.625761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.677 [2024-11-27 21:58:07.625793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:44.677 [2024-11-27 21:58:07.625807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:25:44.677 [2024-11-27 21:58:07.625823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.677 [2024-11-27 21:58:07.630861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.677 [2024-11-27 21:58:07.630895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:44.677 [2024-11-27 21:58:07.630910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.008 ms 00:25:44.677 [2024-11-27 21:58:07.630921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.677 [2024-11-27 21:58:07.633124] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:44.677 [2024-11-27 21:58:07.633160] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:44.677 [2024-11-27 21:58:07.633176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.677 [2024-11-27 21:58:07.633191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:44.677 [2024-11-27 21:58:07.633203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.137 ms 00:25:44.677 [2024-11-27 21:58:07.633215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.677 [2024-11-27 21:58:07.646836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.677 [2024-11-27 21:58:07.646869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:44.678 [2024-11-27 
21:58:07.646882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.575 ms 00:25:44.678 [2024-11-27 21:58:07.646899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.678 [2024-11-27 21:58:07.648262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.678 [2024-11-27 21:58:07.648371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:44.678 [2024-11-27 21:58:07.648383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.323 ms 00:25:44.678 [2024-11-27 21:58:07.648390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.678 [2024-11-27 21:58:07.650083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.678 [2024-11-27 21:58:07.650113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:44.678 [2024-11-27 21:58:07.650123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.667 ms 00:25:44.678 [2024-11-27 21:58:07.650129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.678 [2024-11-27 21:58:07.650425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.678 [2024-11-27 21:58:07.650447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:44.678 [2024-11-27 21:58:07.650458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:25:44.678 [2024-11-27 21:58:07.650464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.678 [2024-11-27 21:58:07.663818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.678 [2024-11-27 21:58:07.663856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:44.678 [2024-11-27 21:58:07.663866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.340 ms 00:25:44.678 [2024-11-27 21:58:07.663872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.678 [2024-11-27 21:58:07.669677] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:44.678 [2024-11-27 21:58:07.671625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.678 [2024-11-27 21:58:07.671650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:44.678 [2024-11-27 21:58:07.671666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.722 ms 00:25:44.678 [2024-11-27 21:58:07.671673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.678 [2024-11-27 21:58:07.671713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.678 [2024-11-27 21:58:07.671721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:44.678 [2024-11-27 21:58:07.671732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:44.678 [2024-11-27 21:58:07.671738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.678 [2024-11-27 21:58:07.671790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.678 [2024-11-27 21:58:07.671801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:44.678 [2024-11-27 21:58:07.671808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:25:44.678 [2024-11-27 21:58:07.671814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.678 [2024-11-27 21:58:07.671829] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.678 [2024-11-27 21:58:07.671838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:44.678 [2024-11-27 21:58:07.671845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:44.678 [2024-11-27 21:58:07.671853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.678 [2024-11-27 21:58:07.671880] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:44.678 [2024-11-27 21:58:07.671888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.678 [2024-11-27 21:58:07.671895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:44.678 [2024-11-27 21:58:07.671901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:44.678 [2024-11-27 21:58:07.671910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.678 [2024-11-27 21:58:07.674770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.678 [2024-11-27 21:58:07.674799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:44.678 [2024-11-27 21:58:07.674807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.847 ms 00:25:44.678 [2024-11-27 21:58:07.674819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.678 [2024-11-27 21:58:07.674875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.678 [2024-11-27 21:58:07.674883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:44.678 [2024-11-27 21:58:07.674889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:44.678 [2024-11-27 21:58:07.674895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.678 [2024-11-27 21:58:07.675707] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 85.454 ms, result 0 00:25:45.619  [2024-11-27T21:58:10.127Z] Copying: 21/1024 [MB] (21 MBps) [2024-11-27T21:58:10.699Z] Copying: 42/1024 [MB] (21 MBps) [2024-11-27T21:58:12.085Z] Copying: 64/1024 [MB] (21 MBps) [2024-11-27T21:58:13.032Z] Copying: 83/1024 [MB] (19 MBps) [2024-11-27T21:58:13.978Z] Copying: 95/1024 [MB] (11 MBps) [2024-11-27T21:58:14.940Z] Copying: 106/1024 [MB] (11 MBps) [2024-11-27T21:58:15.885Z] Copying: 118/1024 [MB] (12 MBps) [2024-11-27T21:58:16.830Z] Copying: 133/1024 [MB] (15 MBps) [2024-11-27T21:58:17.772Z] Copying: 144/1024 [MB] (10 MBps) [2024-11-27T21:58:18.706Z] Copying: 158/1024 [MB] (13 MBps) [2024-11-27T21:58:20.094Z] Copying: 195/1024 [MB] (36 MBps) [2024-11-27T21:58:21.033Z] Copying: 205/1024 [MB] (10 MBps) [2024-11-27T21:58:21.974Z] Copying: 224/1024 [MB] (19 MBps) [2024-11-27T21:58:22.918Z] Copying: 269/1024 [MB] (44 MBps) [2024-11-27T21:58:23.932Z] Copying: 284/1024 [MB] (15 MBps) [2024-11-27T21:58:24.877Z] Copying: 297/1024 [MB] (12 MBps) [2024-11-27T21:58:25.816Z] Copying: 312/1024 [MB] (15 MBps) [2024-11-27T21:58:26.760Z] Copying: 343/1024 [MB] (30 MBps) [2024-11-27T21:58:27.706Z] Copying: 353/1024 [MB] (10 MBps) [2024-11-27T21:58:29.095Z] Copying: 365/1024 [MB] (12 MBps) [2024-11-27T21:58:30.050Z] Copying: 377/1024 [MB] (12 MBps) [2024-11-27T21:58:30.993Z] Copying: 391/1024 [MB] (14 MBps) [2024-11-27T21:58:31.938Z] Copying: 406/1024 [MB] (14 MBps) [2024-11-27T21:58:32.883Z] Copying: 424/1024 [MB] (18 MBps) [2024-11-27T21:58:33.821Z] 
Copying: 440/1024 [MB] (16 MBps) [2024-11-27T21:58:34.753Z] Copying: 467/1024 [MB] (26 MBps) [2024-11-27T21:58:36.124Z] Copying: 504/1024 [MB] (37 MBps) [2024-11-27T21:58:36.688Z] Copying: 540/1024 [MB] (35 MBps) [2024-11-27T21:58:38.061Z] Copying: 576/1024 [MB] (35 MBps) [2024-11-27T21:58:39.000Z] Copying: 609/1024 [MB] (33 MBps) [2024-11-27T21:58:39.944Z] Copying: 641/1024 [MB] (31 MBps) [2024-11-27T21:58:40.888Z] Copying: 655/1024 [MB] (14 MBps) [2024-11-27T21:58:41.827Z] Copying: 666/1024 [MB] (10 MBps) [2024-11-27T21:58:42.770Z] Copying: 688/1024 [MB] (21 MBps) [2024-11-27T21:58:43.712Z] Copying: 715/1024 [MB] (27 MBps) [2024-11-27T21:58:45.086Z] Copying: 731/1024 [MB] (16 MBps) [2024-11-27T21:58:46.031Z] Copying: 760/1024 [MB] (29 MBps) [2024-11-27T21:58:46.975Z] Copying: 797/1024 [MB] (36 MBps) [2024-11-27T21:58:47.922Z] Copying: 809/1024 [MB] (12 MBps) [2024-11-27T21:58:48.864Z] Copying: 826/1024 [MB] (16 MBps) [2024-11-27T21:58:49.817Z] Copying: 844/1024 [MB] (17 MBps) [2024-11-27T21:58:50.758Z] Copying: 880/1024 [MB] (36 MBps) [2024-11-27T21:58:51.693Z] Copying: 901/1024 [MB] (20 MBps) [2024-11-27T21:58:52.708Z] Copying: 929/1024 [MB] (27 MBps) [2024-11-27T21:58:54.128Z] Copying: 974/1024 [MB] (45 MBps) [2024-11-27T21:58:54.702Z] Copying: 1004/1024 [MB] (30 MBps) [2024-11-27T21:58:55.647Z] Copying: 1023/1024 [MB] (18 MBps) [2024-11-27T21:58:55.647Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-11-27 21:58:55.515655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.526 [2024-11-27 21:58:55.515730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:32.526 [2024-11-27 21:58:55.515749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:32.526 [2024-11-27 21:58:55.515758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.526 [2024-11-27 21:58:55.519056] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:32.526 [2024-11-27 21:58:55.522708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.526 [2024-11-27 21:58:55.522763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:32.526 [2024-11-27 21:58:55.522776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.601 ms 00:26:32.526 [2024-11-27 21:58:55.522786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.526 [2024-11-27 21:58:55.534194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.526 [2024-11-27 21:58:55.534238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:32.526 [2024-11-27 21:58:55.534251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.300 ms 00:26:32.526 [2024-11-27 21:58:55.534260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.526 [2024-11-27 21:58:55.557455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.526 [2024-11-27 21:58:55.557497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:32.526 [2024-11-27 21:58:55.557509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.169 ms 00:26:32.526 [2024-11-27 21:58:55.557517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.526 [2024-11-27 21:58:55.563668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.526 [2024-11-27 21:58:55.563704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Finish L2P trims 00:26:32.526 [2024-11-27 21:58:55.563716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.113 ms 00:26:32.526 [2024-11-27 21:58:55.563724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.526 [2024-11-27 21:58:55.566484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.526 [2024-11-27 21:58:55.566526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:32.526 [2024-11-27 21:58:55.566538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.705 ms 00:26:32.526 [2024-11-27 21:58:55.566545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.526 [2024-11-27 21:58:55.571117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.526 [2024-11-27 21:58:55.571162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:32.526 [2024-11-27 21:58:55.571173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.530 ms 00:26:32.526 [2024-11-27 21:58:55.571182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.788 [2024-11-27 21:58:55.758081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.788 [2024-11-27 21:58:55.758138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:32.788 [2024-11-27 21:58:55.758149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 186.855 ms 00:26:32.788 [2024-11-27 21:58:55.758157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.788 [2024-11-27 21:58:55.760975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.788 [2024-11-27 21:58:55.761043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:32.788 [2024-11-27 21:58:55.761053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.800 ms 00:26:32.788 [2024-11-27 21:58:55.761060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.788 [2024-11-27 21:58:55.763104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.788 [2024-11-27 21:58:55.763145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:32.788 [2024-11-27 21:58:55.763154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.004 ms 00:26:32.788 [2024-11-27 21:58:55.763162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.788 [2024-11-27 21:58:55.764786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.788 [2024-11-27 21:58:55.764825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:32.788 [2024-11-27 21:58:55.764834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.584 ms 00:26:32.788 [2024-11-27 21:58:55.764842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.788 [2024-11-27 21:58:55.766589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.788 [2024-11-27 21:58:55.766629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:32.788 [2024-11-27 21:58:55.766638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.681 ms 00:26:32.788 [2024-11-27 21:58:55.766645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.788 [2024-11-27 21:58:55.766683] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 
00:26:32.788 [2024-11-27 21:58:55.766704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 105984 / 261120 wr_cnt: 1 state: open 00:26:32.788 [2024-11-27 21:58:55.766721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:32.788 [2024-11-27 21:58:55.766730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:32.788 [2024-11-27 21:58:55.766738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:32.788 [2024-11-27 21:58:55.766747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:32.788 [2024-11-27 21:58:55.766755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:32.788 [2024-11-27 21:58:55.766763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:32.788 [2024-11-27 21:58:55.766770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:32.788 [2024-11-27 21:58:55.766778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:32.788 [2024-11-27 21:58:55.766786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:32.788 [2024-11-27 21:58:55.766793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:32.788 [2024-11-27 21:58:55.766801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:32.788 [2024-11-27 21:58:55.766808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:32.788 [2024-11-27 21:58:55.766817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:32.788 [2024-11-27 21:58:55.766825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:32.788 [2024-11-27 21:58:55.766833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:32.788 [2024-11-27 21:58:55.766841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:32.788 [2024-11-27 21:58:55.766848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.766855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.766862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.766870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.766877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.766884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.766892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.766906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 
wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.766914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.766923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.766930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.766939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.766947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.766958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.766966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.766973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.766981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.766989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.766997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767309] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767530] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:32.789 [2024-11-27 21:58:55.767546] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:32.789 [2024-11-27 21:58:55.767558] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1938720-a751-43f9-bccf-05588f3ff9ab 00:26:32.789 [2024-11-27 21:58:55.767566] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 105984 00:26:32.789 [2024-11-27 21:58:55.767574] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 106944 00:26:32.789 [2024-11-27 21:58:55.767584] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 105984 00:26:32.789 [2024-11-27 21:58:55.767596] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0091 00:26:32.789 [2024-11-27 21:58:55.767604] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:32.789 [2024-11-27 21:58:55.767612] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:32.789 [2024-11-27 21:58:55.767620] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:32.789 [2024-11-27 21:58:55.767626] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:32.789 [2024-11-27 21:58:55.767632] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:32.790 [2024-11-27 21:58:55.767639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.790 [2024-11-27 21:58:55.767647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:32.790 [2024-11-27 21:58:55.767661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.957 ms 00:26:32.790 [2024-11-27 21:58:55.767669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.790 [2024-11-27 21:58:55.769977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.790 [2024-11-27 21:58:55.770002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:32.790 [2024-11-27 21:58:55.770016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.291 ms 00:26:32.790 [2024-11-27 21:58:55.770025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.790 [2024-11-27 21:58:55.770158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:32.790 [2024-11-27 21:58:55.770170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:32.790 [2024-11-27 21:58:55.770179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:26:32.790 [2024-11-27 21:58:55.770188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.790 [2024-11-27 21:58:55.777586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.790 [2024-11-27 21:58:55.777629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:32.790 [2024-11-27 21:58:55.777646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.790 [2024-11-27 21:58:55.777653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.790 [2024-11-27 21:58:55.777713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.790 [2024-11-27 21:58:55.777722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:32.790 [2024-11-27 21:58:55.777732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.790 [2024-11-27 
21:58:55.777740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.790 [2024-11-27 21:58:55.777790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.790 [2024-11-27 21:58:55.777800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:32.790 [2024-11-27 21:58:55.777809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.790 [2024-11-27 21:58:55.777817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.790 [2024-11-27 21:58:55.777851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.790 [2024-11-27 21:58:55.777865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:32.790 [2024-11-27 21:58:55.777876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.790 [2024-11-27 21:58:55.777883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.790 [2024-11-27 21:58:55.791083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.790 [2024-11-27 21:58:55.791129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:32.790 [2024-11-27 21:58:55.791140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.790 [2024-11-27 21:58:55.791149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.790 [2024-11-27 21:58:55.800837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.790 [2024-11-27 21:58:55.800879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:32.790 [2024-11-27 21:58:55.800891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.790 [2024-11-27 21:58:55.800899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.790 [2024-11-27 21:58:55.800950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.790 [2024-11-27 21:58:55.800960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:32.790 [2024-11-27 21:58:55.800968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.790 [2024-11-27 21:58:55.800977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.790 [2024-11-27 21:58:55.801040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.790 [2024-11-27 21:58:55.801050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:32.790 [2024-11-27 21:58:55.801062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.790 [2024-11-27 21:58:55.801071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.790 [2024-11-27 21:58:55.801140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.790 [2024-11-27 21:58:55.801150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:32.790 [2024-11-27 21:58:55.801164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.790 [2024-11-27 21:58:55.801172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.790 [2024-11-27 21:58:55.801203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.790 [2024-11-27 21:58:55.801212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:32.790 [2024-11-27 21:58:55.801220] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.790 [2024-11-27 21:58:55.801232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.790 [2024-11-27 21:58:55.801283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.790 [2024-11-27 21:58:55.801298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:32.790 [2024-11-27 21:58:55.801306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.790 [2024-11-27 21:58:55.801321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.790 [2024-11-27 21:58:55.801432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.790 [2024-11-27 21:58:55.801444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:32.790 [2024-11-27 21:58:55.801456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.790 [2024-11-27 21:58:55.801465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.790 [2024-11-27 21:58:55.801607] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 287.903 ms, result 0 00:26:33.731 00:26:33.731 00:26:33.731 21:58:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:36.291 21:58:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:36.291 [2024-11-27 21:58:58.856198] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:26:36.291 [2024-11-27 21:58:58.856380] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92061 ] 00:26:36.291 [2024-11-27 21:58:59.005296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.291 [2024-11-27 21:58:59.034022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.291 [2024-11-27 21:58:59.151484] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:36.291 [2024-11-27 21:58:59.151570] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:36.291 [2024-11-27 21:58:59.313312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.291 [2024-11-27 21:58:59.313388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:36.291 [2024-11-27 21:58:59.313407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:36.291 [2024-11-27 21:58:59.313417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.291 [2024-11-27 21:58:59.313479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.291 [2024-11-27 21:58:59.313489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:36.291 [2024-11-27 21:58:59.313499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:26:36.291 [2024-11-27 21:58:59.313516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.291 [2024-11-27 21:58:59.313545] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:36.291 [2024-11-27 21:58:59.313881] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:36.291 [2024-11-27 21:58:59.313913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.291 [2024-11-27 21:58:59.313923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:36.291 [2024-11-27 21:58:59.313935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:26:36.291 [2024-11-27 21:58:59.313943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.291 [2024-11-27 21:58:59.315781] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:36.291 [2024-11-27 21:58:59.319576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.291 [2024-11-27 21:58:59.319627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:36.291 [2024-11-27 21:58:59.319640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.797 ms 00:26:36.291 [2024-11-27 21:58:59.319659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.291 [2024-11-27 21:58:59.319740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.291 [2024-11-27 21:58:59.319750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:36.291 [2024-11-27 21:58:59.319759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:26:36.291 [2024-11-27 21:58:59.319768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.291 [2024-11-27 21:58:59.328002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:36.291 [2024-11-27 21:58:59.328045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:36.291 [2024-11-27 21:58:59.328059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.190 ms 00:26:36.291 [2024-11-27 21:58:59.328067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.291 [2024-11-27 21:58:59.328173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.291 [2024-11-27 21:58:59.328184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:36.291 [2024-11-27 21:58:59.328194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:26:36.291 [2024-11-27 21:58:59.328201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.291 [2024-11-27 21:58:59.328259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.291 [2024-11-27 21:58:59.328271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:36.291 [2024-11-27 21:58:59.328279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:36.291 [2024-11-27 21:58:59.328290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.291 [2024-11-27 21:58:59.328324] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:36.291 [2024-11-27 21:58:59.330447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.291 [2024-11-27 21:58:59.330489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:36.291 [2024-11-27 21:58:59.330504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.136 ms 00:26:36.291 [2024-11-27 21:58:59.330512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.291 [2024-11-27 21:58:59.330552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.291 [2024-11-27 21:58:59.330560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:36.291 [2024-11-27 21:58:59.330573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:36.291 [2024-11-27 21:58:59.330584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.291 [2024-11-27 21:58:59.330606] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:36.291 [2024-11-27 21:58:59.330629] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:36.291 [2024-11-27 21:58:59.330674] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:36.291 [2024-11-27 21:58:59.330696] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:36.291 [2024-11-27 21:58:59.330803] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:36.291 [2024-11-27 21:58:59.330822] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:36.291 [2024-11-27 21:58:59.330836] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:36.291 [2024-11-27 21:58:59.330847] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:36.291 [2024-11-27 21:58:59.330856] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:36.291 [2024-11-27 21:58:59.330869] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:36.291 [2024-11-27 21:58:59.330878] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:36.291 [2024-11-27 21:58:59.330887] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:36.291 [2024-11-27 21:58:59.330896] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:36.291 [2024-11-27 21:58:59.330905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.291 [2024-11-27 21:58:59.330914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:36.291 [2024-11-27 21:58:59.330923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:26:36.291 [2024-11-27 21:58:59.330930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.291 [2024-11-27 21:58:59.331016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.291 [2024-11-27 21:58:59.331031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:36.291 [2024-11-27 21:58:59.331040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:26:36.291 [2024-11-27 21:58:59.331047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.291 [2024-11-27 21:58:59.331153] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:36.291 [2024-11-27 21:58:59.331176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:36.291 [2024-11-27 21:58:59.331186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:36.291 [2024-11-27 21:58:59.331195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:36.291 [2024-11-27 21:58:59.331209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:36.291 [2024-11-27 21:58:59.331218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:36.291 [2024-11-27 21:58:59.331226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:36.291 [2024-11-27 21:58:59.331234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:36.291 [2024-11-27 21:58:59.331242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:36.292 [2024-11-27 21:58:59.331251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:36.292 [2024-11-27 21:58:59.331259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:36.292 [2024-11-27 21:58:59.331267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:36.292 [2024-11-27 21:58:59.331276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:36.292 [2024-11-27 21:58:59.331284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:36.292 [2024-11-27 21:58:59.331292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:36.292 [2024-11-27 21:58:59.331302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:36.292 [2024-11-27 21:58:59.331314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:36.292 [2024-11-27 21:58:59.331323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:36.292 [2024-11-27 21:58:59.331331] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:36.292 [2024-11-27 21:58:59.331356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:36.292 [2024-11-27 21:58:59.331365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:36.292 [2024-11-27 21:58:59.331374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:36.292 [2024-11-27 21:58:59.331384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:36.292 [2024-11-27 21:58:59.331392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:36.292 [2024-11-27 21:58:59.331400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:36.292 [2024-11-27 21:58:59.331407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:36.292 [2024-11-27 21:58:59.331415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:36.292 [2024-11-27 21:58:59.331422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:36.292 [2024-11-27 21:58:59.331430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:36.292 [2024-11-27 21:58:59.331437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:36.292 [2024-11-27 21:58:59.331445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:36.292 [2024-11-27 21:58:59.331453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:36.292 [2024-11-27 21:58:59.331463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:36.292 [2024-11-27 21:58:59.331470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:36.292 [2024-11-27 21:58:59.331477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:36.292 [2024-11-27 21:58:59.331484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:36.292 [2024-11-27 21:58:59.331490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:36.292 [2024-11-27 21:58:59.331496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:36.292 [2024-11-27 21:58:59.331503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:36.292 [2024-11-27 21:58:59.331510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:36.292 [2024-11-27 21:58:59.331516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:36.292 [2024-11-27 21:58:59.331522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:36.292 [2024-11-27 21:58:59.331529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:36.292 [2024-11-27 21:58:59.331536] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:36.292 [2024-11-27 21:58:59.331547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:36.292 [2024-11-27 21:58:59.331556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:36.292 [2024-11-27 21:58:59.331568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:36.292 [2024-11-27 21:58:59.331578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:36.292 [2024-11-27 21:58:59.331587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:36.292 [2024-11-27 21:58:59.331595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:36.292 
[2024-11-27 21:58:59.331602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:36.292 [2024-11-27 21:58:59.331610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:36.292 [2024-11-27 21:58:59.331621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:36.292 [2024-11-27 21:58:59.331630] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:36.292 [2024-11-27 21:58:59.331639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:36.292 [2024-11-27 21:58:59.331648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:36.292 [2024-11-27 21:58:59.331655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:36.292 [2024-11-27 21:58:59.331663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:36.292 [2024-11-27 21:58:59.331670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:36.292 [2024-11-27 21:58:59.331677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:36.292 [2024-11-27 21:58:59.331684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:36.292 [2024-11-27 21:58:59.331691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:36.292 [2024-11-27 21:58:59.331698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:36.292 [2024-11-27 21:58:59.331705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:36.292 [2024-11-27 21:58:59.331722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:36.292 [2024-11-27 21:58:59.331731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:36.292 [2024-11-27 21:58:59.331739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:36.292 [2024-11-27 21:58:59.331746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:36.292 [2024-11-27 21:58:59.331755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:36.292 [2024-11-27 21:58:59.331765] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:36.292 [2024-11-27 21:58:59.331774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:36.292 [2024-11-27 21:58:59.331783] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:36.292 [2024-11-27 21:58:59.331793] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:36.292 [2024-11-27 21:58:59.331801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:36.292 [2024-11-27 21:58:59.331809] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:36.292 [2024-11-27 21:58:59.331817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.292 [2024-11-27 21:58:59.331825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:36.292 [2024-11-27 21:58:59.331834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.737 ms 00:26:36.292 [2024-11-27 21:58:59.331845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.292 [2024-11-27 21:58:59.345824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.292 [2024-11-27 21:58:59.345870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:36.292 [2024-11-27 21:58:59.345889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.930 ms 00:26:36.292 [2024-11-27 21:58:59.345900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.292 [2024-11-27 21:58:59.345991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.292 [2024-11-27 21:58:59.346001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:36.292 [2024-11-27 21:58:59.346014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:26:36.292 [2024-11-27 21:58:59.346023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.292 [2024-11-27 21:58:59.369209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.292 [2024-11-27 21:58:59.369284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:36.292 [2024-11-27 21:58:59.369304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.126 ms 00:26:36.292 [2024-11-27 21:58:59.369318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.292 [2024-11-27 21:58:59.369403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.293 [2024-11-27 21:58:59.369431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:36.293 [2024-11-27 21:58:59.369447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:36.293 [2024-11-27 21:58:59.369459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.293 [2024-11-27 21:58:59.370118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.293 [2024-11-27 21:58:59.370182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:36.293 [2024-11-27 21:58:59.370200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:26:36.293 [2024-11-27 21:58:59.370220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.293 [2024-11-27 21:58:59.370449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.293 [2024-11-27 21:58:59.370466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:36.293 [2024-11-27 21:58:59.370481] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:26:36.293 [2024-11-27 21:58:59.370495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.293 [2024-11-27 21:58:59.378721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.293 [2024-11-27 21:58:59.378770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:36.293 [2024-11-27 21:58:59.378781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.195 ms 00:26:36.293 [2024-11-27 21:58:59.378800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.293 [2024-11-27 21:58:59.382613] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:36.293 [2024-11-27 21:58:59.382664] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:36.293 [2024-11-27 21:58:59.382681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.293 [2024-11-27 21:58:59.382690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:36.293 [2024-11-27 21:58:59.382699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.786 ms 00:26:36.293 [2024-11-27 21:58:59.382712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.293 [2024-11-27 21:58:59.398875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.293 [2024-11-27 21:58:59.398921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:36.293 [2024-11-27 21:58:59.398934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.111 ms 00:26:36.293 [2024-11-27 21:58:59.398944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.293 [2024-11-27 21:58:59.402112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.293 [2024-11-27 21:58:59.402158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:36.293 [2024-11-27 21:58:59.402168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.116 ms 00:26:36.293 [2024-11-27 21:58:59.402177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.293 [2024-11-27 21:58:59.404802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.293 [2024-11-27 21:58:59.404847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:36.293 [2024-11-27 21:58:59.404857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.581 ms 00:26:36.293 [2024-11-27 21:58:59.404865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.293 [2024-11-27 21:58:59.405217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.293 [2024-11-27 21:58:59.405232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:36.293 [2024-11-27 21:58:59.405241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:26:36.293 [2024-11-27 21:58:59.405252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.555 [2024-11-27 21:58:59.428362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.555 [2024-11-27 21:58:59.428419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:36.555 [2024-11-27 21:58:59.428433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
23.078 ms 00:26:36.555 [2024-11-27 21:58:59.428443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.555 [2024-11-27 21:58:59.436561] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:36.555 [2024-11-27 21:58:59.439650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.555 [2024-11-27 21:58:59.439691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:36.555 [2024-11-27 21:58:59.439704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.157 ms 00:26:36.555 [2024-11-27 21:58:59.439712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.555 [2024-11-27 21:58:59.439792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.555 [2024-11-27 21:58:59.439804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:36.555 [2024-11-27 21:58:59.439813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:36.555 [2024-11-27 21:58:59.439833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.555 [2024-11-27 21:58:59.441674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.555 [2024-11-27 21:58:59.441724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:36.555 [2024-11-27 21:58:59.441735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.802 ms 00:26:36.555 [2024-11-27 21:58:59.441744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.555 [2024-11-27 21:58:59.441771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.555 [2024-11-27 21:58:59.441781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:36.555 [2024-11-27 21:58:59.441789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:36.555 [2024-11-27 21:58:59.441797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.555 [2024-11-27 21:58:59.441837] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:36.555 [2024-11-27 21:58:59.441848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.555 [2024-11-27 21:58:59.441858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:36.555 [2024-11-27 21:58:59.441870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:36.555 [2024-11-27 21:58:59.441878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.555 [2024-11-27 21:58:59.447716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.555 [2024-11-27 21:58:59.447769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:36.555 [2024-11-27 21:58:59.447781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.803 ms 00:26:36.555 [2024-11-27 21:58:59.447794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.555 [2024-11-27 21:58:59.447879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:36.555 [2024-11-27 21:58:59.447889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:36.555 [2024-11-27 21:58:59.447899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:26:36.555 [2024-11-27 21:58:59.447911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:36.555 
[2024-11-27 21:58:59.449214] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 135.417 ms, result 0 00:26:37.941  [2024-11-27T21:59:01.635Z] Copying: 1104/1048576 [kB] (1104 kBps) [2024-11-27T21:59:03.025Z] Copying: 4456/1048576 [kB] (3352 kBps) [2024-11-27T21:59:03.971Z] Copying: 15/1024 [MB] (10 MBps) [2024-11-27T21:59:04.916Z] Copying: 33/1024 [MB] (18 MBps) [2024-11-27T21:59:05.858Z] Copying: 49/1024 [MB] (15 MBps) [2024-11-27T21:59:06.800Z] Copying: 70/1024 [MB] (21 MBps) [2024-11-27T21:59:07.747Z] Copying: 92/1024 [MB] (22 MBps) [2024-11-27T21:59:08.692Z] Copying: 120/1024 [MB] (27 MBps) [2024-11-27T21:59:10.074Z] Copying: 139/1024 [MB] (19 MBps) [2024-11-27T21:59:10.646Z] Copying: 167/1024 [MB] (28 MBps) [2024-11-27T21:59:12.034Z] Copying: 183/1024 [MB] (15 MBps) [2024-11-27T21:59:12.981Z] Copying: 198/1024 [MB] (15 MBps) [2024-11-27T21:59:13.924Z] Copying: 214/1024 [MB] (16 MBps) [2024-11-27T21:59:14.866Z] Copying: 240/1024 [MB] (25 MBps) [2024-11-27T21:59:15.806Z] Copying: 261/1024 [MB] (20 MBps) [2024-11-27T21:59:16.745Z] Copying: 280/1024 [MB] (19 MBps) [2024-11-27T21:59:17.688Z] Copying: 308/1024 [MB] (28 MBps) [2024-11-27T21:59:19.073Z] Copying: 344/1024 [MB] (35 MBps) [2024-11-27T21:59:19.643Z] Copying: 374/1024 [MB] (29 MBps) [2024-11-27T21:59:21.027Z] Copying: 414/1024 [MB] (40 MBps) [2024-11-27T21:59:22.013Z] Copying: 447/1024 [MB] (33 MBps) [2024-11-27T21:59:22.996Z] Copying: 477/1024 [MB] (30 MBps) [2024-11-27T21:59:23.938Z] Copying: 509/1024 [MB] (31 MBps) [2024-11-27T21:59:24.883Z] Copying: 541/1024 [MB] (31 MBps) [2024-11-27T21:59:25.828Z] Copying: 570/1024 [MB] (29 MBps) [2024-11-27T21:59:26.770Z] Copying: 589/1024 [MB] (18 MBps) [2024-11-27T21:59:27.712Z] Copying: 605/1024 [MB] (15 MBps) [2024-11-27T21:59:28.654Z] Copying: 637/1024 [MB] (32 MBps) [2024-11-27T21:59:30.040Z] Copying: 663/1024 [MB] (25 MBps) [2024-11-27T21:59:30.981Z] Copying: 679/1024 [MB] (16 MBps) [2024-11-27T21:59:31.924Z] Copying: 699/1024 [MB] (19 MBps) [2024-11-27T21:59:32.865Z] Copying: 730/1024 [MB] (30 MBps) [2024-11-27T21:59:33.808Z] Copying: 760/1024 [MB] (29 MBps) [2024-11-27T21:59:34.751Z] Copying: 792/1024 [MB] (32 MBps) [2024-11-27T21:59:35.696Z] Copying: 819/1024 [MB] (27 MBps) [2024-11-27T21:59:36.648Z] Copying: 847/1024 [MB] (28 MBps) [2024-11-27T21:59:38.035Z] Copying: 875/1024 [MB] (27 MBps) [2024-11-27T21:59:38.980Z] Copying: 899/1024 [MB] (24 MBps) [2024-11-27T21:59:39.921Z] Copying: 923/1024 [MB] (24 MBps) [2024-11-27T21:59:40.863Z] Copying: 947/1024 [MB] (23 MBps) [2024-11-27T21:59:41.807Z] Copying: 975/1024 [MB] (27 MBps) [2024-11-27T21:59:42.752Z] Copying: 998/1024 [MB] (23 MBps) [2024-11-27T21:59:43.014Z] Copying: 1019/1024 [MB] (21 MBps) [2024-11-27T21:59:43.277Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-27 21:59:43.041512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.156 [2024-11-27 21:59:43.041604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:20.156 [2024-11-27 21:59:43.041622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:20.156 [2024-11-27 21:59:43.041633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.156 [2024-11-27 21:59:43.041661] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:20.156 [2024-11-27 21:59:43.042578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.156 [2024-11-27 
21:59:43.042722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:20.156 [2024-11-27 21:59:43.042736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.899 ms 00:27:20.156 [2024-11-27 21:59:43.042747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.156 [2024-11-27 21:59:43.043027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.156 [2024-11-27 21:59:43.043042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:20.156 [2024-11-27 21:59:43.043052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:27:20.156 [2024-11-27 21:59:43.043062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.156 [2024-11-27 21:59:43.057481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.156 [2024-11-27 21:59:43.057535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:20.156 [2024-11-27 21:59:43.057554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.399 ms 00:27:20.156 [2024-11-27 21:59:43.057563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.156 [2024-11-27 21:59:43.063761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.156 [2024-11-27 21:59:43.063802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:20.156 [2024-11-27 21:59:43.063813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.161 ms 00:27:20.156 [2024-11-27 21:59:43.063822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.156 [2024-11-27 21:59:43.066750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.156 [2024-11-27 21:59:43.066800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:20.156 [2024-11-27 21:59:43.066810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.868 ms 00:27:20.156 [2024-11-27 21:59:43.066818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.156 [2024-11-27 21:59:43.071596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.156 [2024-11-27 21:59:43.071651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:20.156 [2024-11-27 21:59:43.071662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.737 ms 00:27:20.156 [2024-11-27 21:59:43.071671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.156 [2024-11-27 21:59:43.074206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.156 [2024-11-27 21:59:43.074251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:20.156 [2024-11-27 21:59:43.074263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.488 ms 00:27:20.156 [2024-11-27 21:59:43.074271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.156 [2024-11-27 21:59:43.076871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.156 [2024-11-27 21:59:43.076951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:20.156 [2024-11-27 21:59:43.076964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.579 ms 00:27:20.156 [2024-11-27 21:59:43.076972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.156 [2024-11-27 21:59:43.079054] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:27:20.156 [2024-11-27 21:59:43.079100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:20.156 [2024-11-27 21:59:43.079110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.034 ms 00:27:20.156 [2024-11-27 21:59:43.079117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.156 [2024-11-27 21:59:43.080755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.156 [2024-11-27 21:59:43.080802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:20.156 [2024-11-27 21:59:43.080811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.597 ms 00:27:20.156 [2024-11-27 21:59:43.080819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.156 [2024-11-27 21:59:43.082439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.156 [2024-11-27 21:59:43.082484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:20.156 [2024-11-27 21:59:43.082494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.548 ms 00:27:20.156 [2024-11-27 21:59:43.082502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.156 [2024-11-27 21:59:43.082541] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:20.156 [2024-11-27 21:59:43.082558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:20.156 [2024-11-27 21:59:43.082570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:20.156 [2024-11-27 21:59:43.082580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 
[2024-11-27 21:59:43.082699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:20.156 [2024-11-27 21:59:43.082907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 
state: free 00:27:20.156 [2024-11-27 21:59:43.082915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.082922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.082931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.082948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.082955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.082962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.082970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.082978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.082986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.082994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 
0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:20.157 [2024-11-27 21:59:43.083427] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:20.157 [2024-11-27 21:59:43.083438] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1938720-a751-43f9-bccf-05588f3ff9ab 00:27:20.157 [2024-11-27 21:59:43.083460] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:20.157 [2024-11-27 21:59:43.083467] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 158656 00:27:20.157 [2024-11-27 21:59:43.083474] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 156672 00:27:20.157 [2024-11-27 21:59:43.083483] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0127 00:27:20.157 [2024-11-27 21:59:43.083490] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:20.157 [2024-11-27 21:59:43.083499] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:20.157 [2024-11-27 21:59:43.083506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:20.157 [2024-11-27 21:59:43.083513] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:20.157 [2024-11-27 21:59:43.083520] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:20.157 [2024-11-27 21:59:43.083528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.157 [2024-11-27 21:59:43.083540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:20.157 [2024-11-27 21:59:43.083549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.988 ms 00:27:20.157 [2024-11-27 21:59:43.083557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.157 [2024-11-27 21:59:43.085749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.157 [2024-11-27 21:59:43.085787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:20.157 [2024-11-27 21:59:43.085798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.171 ms 
00:27:20.157 [2024-11-27 21:59:43.085806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.157 [2024-11-27 21:59:43.085929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.157 [2024-11-27 21:59:43.085947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:20.157 [2024-11-27 21:59:43.085957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:27:20.157 [2024-11-27 21:59:43.085965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.157 [2024-11-27 21:59:43.093160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.157 [2024-11-27 21:59:43.093216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:20.157 [2024-11-27 21:59:43.093226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.157 [2024-11-27 21:59:43.093235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.157 [2024-11-27 21:59:43.093291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.157 [2024-11-27 21:59:43.093304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:20.157 [2024-11-27 21:59:43.093311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.157 [2024-11-27 21:59:43.093319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.157 [2024-11-27 21:59:43.093401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.157 [2024-11-27 21:59:43.093412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:20.157 [2024-11-27 21:59:43.093421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.157 [2024-11-27 21:59:43.093429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.157 [2024-11-27 21:59:43.093444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.157 [2024-11-27 21:59:43.093456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:20.158 [2024-11-27 21:59:43.093464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.158 [2024-11-27 21:59:43.093474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.158 [2024-11-27 21:59:43.106137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.158 [2024-11-27 21:59:43.106187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:20.158 [2024-11-27 21:59:43.106197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.158 [2024-11-27 21:59:43.106205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.158 [2024-11-27 21:59:43.115965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.158 [2024-11-27 21:59:43.116012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:20.158 [2024-11-27 21:59:43.116030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:20.158 [2024-11-27 21:59:43.116038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.158 [2024-11-27 21:59:43.116088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:20.158 [2024-11-27 21:59:43.116099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:20.158 [2024-11-27 
21:59:43.116107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:20.158 [2024-11-27 21:59:43.116115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:20.158 [2024-11-27 21:59:43.116141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:20.158 [2024-11-27 21:59:43.116158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:27:20.158 [2024-11-27 21:59:43.116166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:20.158 [2024-11-27 21:59:43.116174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:20.158 [2024-11-27 21:59:43.116249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:20.158 [2024-11-27 21:59:43.116259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:27:20.158 [2024-11-27 21:59:43.116270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:20.158 [2024-11-27 21:59:43.116278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:20.158 [2024-11-27 21:59:43.116307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:20.158 [2024-11-27 21:59:43.116317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:27:20.158 [2024-11-27 21:59:43.116325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:20.158 [2024-11-27 21:59:43.116348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:20.158 [2024-11-27 21:59:43.116405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:20.158 [2024-11-27 21:59:43.116416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:27:20.158 [2024-11-27 21:59:43.116424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:20.158 [2024-11-27 21:59:43.116432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:20.158 [2024-11-27 21:59:43.116478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:20.158 [2024-11-27 21:59:43.116493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:27:20.158 [2024-11-27 21:59:43.116501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:27:20.158 [2024-11-27 21:59:43.116510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:20.158 [2024-11-27 21:59:43.116646] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 75.107 ms, result 0
00:27:20.419
00:27:20.419
00:27:20.419 21:59:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:27:22.335 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:27:22.335 21:59:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:27:22.596 [2024-11-27 21:59:45.463654] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization...
00:27:22.596 [2024-11-27 21:59:45.463760] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92533 ] 00:27:22.596 [2024-11-27 21:59:45.607718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.596 [2024-11-27 21:59:45.630645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.859 [2024-11-27 21:59:45.721247] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:22.859 [2024-11-27 21:59:45.721312] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:22.859 [2024-11-27 21:59:45.880745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.859 [2024-11-27 21:59:45.880804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:22.859 [2024-11-27 21:59:45.880818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:22.859 [2024-11-27 21:59:45.880827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.860 [2024-11-27 21:59:45.880886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.860 [2024-11-27 21:59:45.880897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:22.860 [2024-11-27 21:59:45.880906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:27:22.860 [2024-11-27 21:59:45.880923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.860 [2024-11-27 21:59:45.880971] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:22.860 [2024-11-27 21:59:45.881382] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:22.860 [2024-11-27 21:59:45.881424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.860 [2024-11-27 21:59:45.881433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:22.860 [2024-11-27 21:59:45.881445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.467 ms 00:27:22.860 [2024-11-27 21:59:45.881459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.860 [2024-11-27 21:59:45.883113] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:22.860 [2024-11-27 21:59:45.886825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.860 [2024-11-27 21:59:45.886874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:22.860 [2024-11-27 21:59:45.886885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.714 ms 00:27:22.860 [2024-11-27 21:59:45.886907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.860 [2024-11-27 21:59:45.886981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.860 [2024-11-27 21:59:45.886991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:22.860 [2024-11-27 21:59:45.887000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:22.860 [2024-11-27 21:59:45.887008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.860 [2024-11-27 21:59:45.894919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:22.860 [2024-11-27 21:59:45.894959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:22.860 [2024-11-27 21:59:45.894977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.864 ms 00:27:22.860 [2024-11-27 21:59:45.894984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.860 [2024-11-27 21:59:45.895088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.860 [2024-11-27 21:59:45.895099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:22.860 [2024-11-27 21:59:45.895111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:27:22.860 [2024-11-27 21:59:45.895120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.860 [2024-11-27 21:59:45.895183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.860 [2024-11-27 21:59:45.895194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:22.860 [2024-11-27 21:59:45.895203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:22.860 [2024-11-27 21:59:45.895215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.860 [2024-11-27 21:59:45.895237] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:22.860 [2024-11-27 21:59:45.897216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.860 [2024-11-27 21:59:45.897253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:22.860 [2024-11-27 21:59:45.897264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.985 ms 00:27:22.860 [2024-11-27 21:59:45.897272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.860 [2024-11-27 21:59:45.897306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.860 [2024-11-27 21:59:45.897314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:22.860 [2024-11-27 21:59:45.897331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:22.860 [2024-11-27 21:59:45.897359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.860 [2024-11-27 21:59:45.897381] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:22.860 [2024-11-27 21:59:45.897401] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:22.860 [2024-11-27 21:59:45.897443] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:22.860 [2024-11-27 21:59:45.897466] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:22.860 [2024-11-27 21:59:45.897573] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:22.860 [2024-11-27 21:59:45.897584] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:22.860 [2024-11-27 21:59:45.897599] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:22.860 [2024-11-27 21:59:45.897610] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:22.860 [2024-11-27 21:59:45.897622] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:22.860 [2024-11-27 21:59:45.897631] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:22.860 [2024-11-27 21:59:45.897639] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:22.860 [2024-11-27 21:59:45.897647] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:22.860 [2024-11-27 21:59:45.897655] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:22.860 [2024-11-27 21:59:45.897663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.860 [2024-11-27 21:59:45.897674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:22.860 [2024-11-27 21:59:45.897683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:27:22.860 [2024-11-27 21:59:45.897691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.860 [2024-11-27 21:59:45.897776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.860 [2024-11-27 21:59:45.897788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:22.860 [2024-11-27 21:59:45.897796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:22.860 [2024-11-27 21:59:45.897804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.860 [2024-11-27 21:59:45.897906] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:22.860 [2024-11-27 21:59:45.897925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:22.860 [2024-11-27 21:59:45.897935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:22.860 [2024-11-27 21:59:45.897944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.860 [2024-11-27 21:59:45.897959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:22.860 [2024-11-27 21:59:45.897967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:22.860 [2024-11-27 21:59:45.897975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:22.860 [2024-11-27 21:59:45.897982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:22.860 [2024-11-27 21:59:45.897990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:22.860 [2024-11-27 21:59:45.898000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:22.860 [2024-11-27 21:59:45.898009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:22.860 [2024-11-27 21:59:45.898016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:22.860 [2024-11-27 21:59:45.898024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:22.860 [2024-11-27 21:59:45.898031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:22.860 [2024-11-27 21:59:45.898039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:22.860 [2024-11-27 21:59:45.898047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.860 [2024-11-27 21:59:45.898056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:22.860 [2024-11-27 21:59:45.898066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:22.860 [2024-11-27 21:59:45.898073] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.860 [2024-11-27 21:59:45.898081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:22.860 [2024-11-27 21:59:45.898089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:22.860 [2024-11-27 21:59:45.898097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:22.860 [2024-11-27 21:59:45.898105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:22.860 [2024-11-27 21:59:45.898113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:22.860 [2024-11-27 21:59:45.898120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:22.860 [2024-11-27 21:59:45.898133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:22.860 [2024-11-27 21:59:45.898141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:22.860 [2024-11-27 21:59:45.898149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:22.860 [2024-11-27 21:59:45.898156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:22.860 [2024-11-27 21:59:45.898164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:22.860 [2024-11-27 21:59:45.898171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:22.860 [2024-11-27 21:59:45.898179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:22.860 [2024-11-27 21:59:45.898187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:22.860 [2024-11-27 21:59:45.898195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:22.860 [2024-11-27 21:59:45.898202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:22.860 [2024-11-27 21:59:45.898209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:22.860 [2024-11-27 21:59:45.898217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:22.860 [2024-11-27 21:59:45.898224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:22.860 [2024-11-27 21:59:45.898233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:22.860 [2024-11-27 21:59:45.898240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.860 [2024-11-27 21:59:45.898248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:22.860 [2024-11-27 21:59:45.898257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:22.860 [2024-11-27 21:59:45.898265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.860 [2024-11-27 21:59:45.898273] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:22.861 [2024-11-27 21:59:45.898285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:22.861 [2024-11-27 21:59:45.898293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:22.861 [2024-11-27 21:59:45.898300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.861 [2024-11-27 21:59:45.898309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:22.861 [2024-11-27 21:59:45.898315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:22.861 [2024-11-27 21:59:45.898323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:22.861 
[2024-11-27 21:59:45.898331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:22.861 [2024-11-27 21:59:45.898353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:22.861 [2024-11-27 21:59:45.898360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:22.861 [2024-11-27 21:59:45.898369] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:22.861 [2024-11-27 21:59:45.898379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:22.861 [2024-11-27 21:59:45.898388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:22.861 [2024-11-27 21:59:45.898396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:22.861 [2024-11-27 21:59:45.898406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:22.861 [2024-11-27 21:59:45.898414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:22.861 [2024-11-27 21:59:45.898420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:22.861 [2024-11-27 21:59:45.898428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:22.861 [2024-11-27 21:59:45.898435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:22.861 [2024-11-27 21:59:45.898443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:22.861 [2024-11-27 21:59:45.898450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:22.861 [2024-11-27 21:59:45.898464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:22.861 [2024-11-27 21:59:45.898472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:22.861 [2024-11-27 21:59:45.898480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:22.861 [2024-11-27 21:59:45.898487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:22.861 [2024-11-27 21:59:45.898495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:22.861 [2024-11-27 21:59:45.898502] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:22.861 [2024-11-27 21:59:45.898511] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:22.861 [2024-11-27 21:59:45.898520] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:22.861 [2024-11-27 21:59:45.898528] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:22.861 [2024-11-27 21:59:45.898538] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:22.861 [2024-11-27 21:59:45.898546] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:22.861 [2024-11-27 21:59:45.898554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.861 [2024-11-27 21:59:45.898562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:22.861 [2024-11-27 21:59:45.898570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms 00:27:22.861 [2024-11-27 21:59:45.898581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.861 [2024-11-27 21:59:45.912183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.861 [2024-11-27 21:59:45.912228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:22.861 [2024-11-27 21:59:45.912249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.557 ms 00:27:22.861 [2024-11-27 21:59:45.912256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.861 [2024-11-27 21:59:45.912363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.861 [2024-11-27 21:59:45.912374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:22.861 [2024-11-27 21:59:45.912383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:27:22.861 [2024-11-27 21:59:45.912391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.861 [2024-11-27 21:59:45.935008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.861 [2024-11-27 21:59:45.935067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:22.861 [2024-11-27 21:59:45.935082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.559 ms 00:27:22.861 [2024-11-27 21:59:45.935102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.861 [2024-11-27 21:59:45.935157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.861 [2024-11-27 21:59:45.935173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:22.861 [2024-11-27 21:59:45.935188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:22.861 [2024-11-27 21:59:45.935198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.861 [2024-11-27 21:59:45.935788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.861 [2024-11-27 21:59:45.935831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:22.861 [2024-11-27 21:59:45.935844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:27:22.861 [2024-11-27 21:59:45.935854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.861 [2024-11-27 21:59:45.936030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.861 [2024-11-27 21:59:45.936042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:22.861 [2024-11-27 21:59:45.936052] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:27:22.861 [2024-11-27 21:59:45.936061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.861 [2024-11-27 21:59:45.944020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.861 [2024-11-27 21:59:45.944065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:22.861 [2024-11-27 21:59:45.944076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.935 ms 00:27:22.861 [2024-11-27 21:59:45.944085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.861 [2024-11-27 21:59:45.947878] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:22.861 [2024-11-27 21:59:45.947927] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:22.861 [2024-11-27 21:59:45.947943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.861 [2024-11-27 21:59:45.947952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:22.861 [2024-11-27 21:59:45.947961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.760 ms 00:27:22.861 [2024-11-27 21:59:45.947968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.861 [2024-11-27 21:59:45.963512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.861 [2024-11-27 21:59:45.963567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:22.861 [2024-11-27 21:59:45.963578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.493 ms 00:27:22.861 [2024-11-27 21:59:45.963596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.861 [2024-11-27 21:59:45.966468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.861 [2024-11-27 21:59:45.966510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:22.861 [2024-11-27 21:59:45.966520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.820 ms 00:27:22.861 [2024-11-27 21:59:45.966527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.861 [2024-11-27 21:59:45.968954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.861 [2024-11-27 21:59:45.968993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:22.861 [2024-11-27 21:59:45.969012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.381 ms 00:27:22.861 [2024-11-27 21:59:45.969019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.861 [2024-11-27 21:59:45.969387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.861 [2024-11-27 21:59:45.969402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:22.861 [2024-11-27 21:59:45.969413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:27:22.861 [2024-11-27 21:59:45.969420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.123 [2024-11-27 21:59:45.993558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.123 [2024-11-27 21:59:45.993623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:23.124 [2024-11-27 21:59:45.993637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
24.111 ms 00:27:23.124 [2024-11-27 21:59:45.993646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.124 [2024-11-27 21:59:46.001707] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:23.124 [2024-11-27 21:59:46.004688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.124 [2024-11-27 21:59:46.004737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:23.124 [2024-11-27 21:59:46.004749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.991 ms 00:27:23.124 [2024-11-27 21:59:46.004757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.124 [2024-11-27 21:59:46.004840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.124 [2024-11-27 21:59:46.004853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:23.124 [2024-11-27 21:59:46.004869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:23.124 [2024-11-27 21:59:46.004878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.124 [2024-11-27 21:59:46.005784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.124 [2024-11-27 21:59:46.005839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:23.124 [2024-11-27 21:59:46.005851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.868 ms 00:27:23.124 [2024-11-27 21:59:46.005859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.124 [2024-11-27 21:59:46.005887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.124 [2024-11-27 21:59:46.005897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:23.124 [2024-11-27 21:59:46.005907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:23.124 [2024-11-27 21:59:46.005915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.124 [2024-11-27 21:59:46.005960] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:23.124 [2024-11-27 21:59:46.005971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.124 [2024-11-27 21:59:46.005987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:23.124 [2024-11-27 21:59:46.006001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:23.124 [2024-11-27 21:59:46.006010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.124 [2024-11-27 21:59:46.011583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.124 [2024-11-27 21:59:46.011630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:23.124 [2024-11-27 21:59:46.011652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.552 ms 00:27:23.124 [2024-11-27 21:59:46.011661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.124 [2024-11-27 21:59:46.011750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.124 [2024-11-27 21:59:46.011761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:23.124 [2024-11-27 21:59:46.011771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:27:23.124 [2024-11-27 21:59:46.011786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.124 
[2024-11-27 21:59:46.013411] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 132.128 ms, result 0 00:27:24.068  [2024-11-27T21:59:48.578Z] Copying: 20/1024 [MB] (20 MBps) [2024-11-27T21:59:49.523Z] Copying: 36/1024 [MB] (15 MBps) [2024-11-27T21:59:50.466Z] Copying: 54/1024 [MB] (18 MBps) [2024-11-27T21:59:51.469Z] Copying: 72/1024 [MB] (18 MBps) [2024-11-27T21:59:52.413Z] Copying: 89/1024 [MB] (17 MBps) [2024-11-27T21:59:53.355Z] Copying: 109/1024 [MB] (19 MBps) [2024-11-27T21:59:54.298Z] Copying: 120/1024 [MB] (11 MBps) [2024-11-27T21:59:55.242Z] Copying: 132/1024 [MB] (11 MBps) [2024-11-27T21:59:56.628Z] Copying: 145/1024 [MB] (13 MBps) [2024-11-27T21:59:57.201Z] Copying: 157/1024 [MB] (12 MBps) [2024-11-27T21:59:58.590Z] Copying: 168/1024 [MB] (10 MBps) [2024-11-27T21:59:59.531Z] Copying: 180/1024 [MB] (12 MBps) [2024-11-27T22:00:00.472Z] Copying: 192/1024 [MB] (11 MBps) [2024-11-27T22:00:01.411Z] Copying: 209/1024 [MB] (17 MBps) [2024-11-27T22:00:02.352Z] Copying: 224/1024 [MB] (14 MBps) [2024-11-27T22:00:03.298Z] Copying: 242/1024 [MB] (17 MBps) [2024-11-27T22:00:04.244Z] Copying: 254/1024 [MB] (12 MBps) [2024-11-27T22:00:05.189Z] Copying: 275/1024 [MB] (21 MBps) [2024-11-27T22:00:06.577Z] Copying: 288/1024 [MB] (12 MBps) [2024-11-27T22:00:07.518Z] Copying: 301/1024 [MB] (12 MBps) [2024-11-27T22:00:08.461Z] Copying: 317/1024 [MB] (16 MBps) [2024-11-27T22:00:09.405Z] Copying: 341/1024 [MB] (24 MBps) [2024-11-27T22:00:10.348Z] Copying: 362/1024 [MB] (20 MBps) [2024-11-27T22:00:11.291Z] Copying: 387/1024 [MB] (25 MBps) [2024-11-27T22:00:12.233Z] Copying: 399/1024 [MB] (11 MBps) [2024-11-27T22:00:13.620Z] Copying: 420/1024 [MB] (21 MBps) [2024-11-27T22:00:14.194Z] Copying: 433/1024 [MB] (12 MBps) [2024-11-27T22:00:15.580Z] Copying: 447/1024 [MB] (14 MBps) [2024-11-27T22:00:16.524Z] Copying: 464/1024 [MB] (16 MBps) [2024-11-27T22:00:17.470Z] Copying: 478/1024 [MB] (14 MBps) [2024-11-27T22:00:18.415Z] Copying: 496/1024 [MB] (17 MBps) [2024-11-27T22:00:19.360Z] Copying: 509/1024 [MB] (12 MBps) [2024-11-27T22:00:20.389Z] Copying: 521/1024 [MB] (12 MBps) [2024-11-27T22:00:21.335Z] Copying: 537/1024 [MB] (15 MBps) [2024-11-27T22:00:22.281Z] Copying: 555/1024 [MB] (17 MBps) [2024-11-27T22:00:23.227Z] Copying: 574/1024 [MB] (19 MBps) [2024-11-27T22:00:24.617Z] Copying: 592/1024 [MB] (18 MBps) [2024-11-27T22:00:25.189Z] Copying: 604/1024 [MB] (11 MBps) [2024-11-27T22:00:26.576Z] Copying: 616/1024 [MB] (11 MBps) [2024-11-27T22:00:27.521Z] Copying: 628/1024 [MB] (12 MBps) [2024-11-27T22:00:28.466Z] Copying: 640/1024 [MB] (12 MBps) [2024-11-27T22:00:29.418Z] Copying: 651/1024 [MB] (10 MBps) [2024-11-27T22:00:30.361Z] Copying: 661/1024 [MB] (10 MBps) [2024-11-27T22:00:31.305Z] Copying: 672/1024 [MB] (10 MBps) [2024-11-27T22:00:32.246Z] Copying: 683/1024 [MB] (10 MBps) [2024-11-27T22:00:33.629Z] Copying: 693/1024 [MB] (10 MBps) [2024-11-27T22:00:34.198Z] Copying: 704/1024 [MB] (10 MBps) [2024-11-27T22:00:35.580Z] Copying: 716/1024 [MB] (12 MBps) [2024-11-27T22:00:36.521Z] Copying: 729/1024 [MB] (13 MBps) [2024-11-27T22:00:37.457Z] Copying: 741/1024 [MB] (11 MBps) [2024-11-27T22:00:38.393Z] Copying: 752/1024 [MB] (10 MBps) [2024-11-27T22:00:39.333Z] Copying: 763/1024 [MB] (11 MBps) [2024-11-27T22:00:40.275Z] Copying: 775/1024 [MB] (11 MBps) [2024-11-27T22:00:41.221Z] Copying: 785/1024 [MB] (10 MBps) [2024-11-27T22:00:42.610Z] Copying: 802/1024 [MB] (16 MBps) [2024-11-27T22:00:43.555Z] Copying: 813/1024 [MB] (10 MBps) 
[2024-11-27T22:00:44.501Z] Copying: 823/1024 [MB] (10 MBps) [2024-11-27T22:00:45.443Z] Copying: 838/1024 [MB] (15 MBps) [2024-11-27T22:00:46.385Z] Copying: 853/1024 [MB] (14 MBps) [2024-11-27T22:00:47.330Z] Copying: 877/1024 [MB] (24 MBps) [2024-11-27T22:00:48.275Z] Copying: 895/1024 [MB] (18 MBps) [2024-11-27T22:00:49.302Z] Copying: 908/1024 [MB] (12 MBps) [2024-11-27T22:00:50.248Z] Copying: 922/1024 [MB] (14 MBps) [2024-11-27T22:00:51.190Z] Copying: 932/1024 [MB] (10 MBps) [2024-11-27T22:00:52.579Z] Copying: 947/1024 [MB] (14 MBps) [2024-11-27T22:00:53.519Z] Copying: 958/1024 [MB] (10 MBps) [2024-11-27T22:00:54.464Z] Copying: 978/1024 [MB] (20 MBps) [2024-11-27T22:00:55.408Z] Copying: 997/1024 [MB] (18 MBps) [2024-11-27T22:00:55.984Z] Copying: 1011/1024 [MB] (14 MBps) [2024-11-27T22:00:55.984Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-27 22:00:55.887197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.864 [2024-11-27 22:00:55.887296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:32.864 [2024-11-27 22:00:55.887331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:32.864 [2024-11-27 22:00:55.887368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.864 [2024-11-27 22:00:55.887396] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:32.864 [2024-11-27 22:00:55.888215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.864 [2024-11-27 22:00:55.888258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:32.864 [2024-11-27 22:00:55.888272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.800 ms 00:28:32.864 [2024-11-27 22:00:55.888283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.864 [2024-11-27 22:00:55.888573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.864 [2024-11-27 22:00:55.888594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:32.864 [2024-11-27 22:00:55.888605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:28:32.864 [2024-11-27 22:00:55.888620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.864 [2024-11-27 22:00:55.894376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.864 [2024-11-27 22:00:55.894417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:32.864 [2024-11-27 22:00:55.894431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.737 ms 00:28:32.864 [2024-11-27 22:00:55.894442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.864 [2024-11-27 22:00:55.902352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.864 [2024-11-27 22:00:55.902391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:32.864 [2024-11-27 22:00:55.902401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.881 ms 00:28:32.864 [2024-11-27 22:00:55.902410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.864 [2024-11-27 22:00:55.905946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.864 [2024-11-27 22:00:55.906011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:32.864 [2024-11-27 22:00:55.906025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 3.422 ms 00:28:32.864 [2024-11-27 22:00:55.906033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.864 [2024-11-27 22:00:55.911307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.864 [2024-11-27 22:00:55.911373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:32.864 [2024-11-27 22:00:55.911385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.220 ms 00:28:32.864 [2024-11-27 22:00:55.911395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.864 [2024-11-27 22:00:55.916485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.864 [2024-11-27 22:00:55.916533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:32.864 [2024-11-27 22:00:55.916544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.037 ms 00:28:32.864 [2024-11-27 22:00:55.916562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.864 [2024-11-27 22:00:55.919867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.864 [2024-11-27 22:00:55.919919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:32.864 [2024-11-27 22:00:55.919930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.287 ms 00:28:32.864 [2024-11-27 22:00:55.919938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.864 [2024-11-27 22:00:55.923098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.864 [2024-11-27 22:00:55.923146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:32.864 [2024-11-27 22:00:55.923156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.114 ms 00:28:32.864 [2024-11-27 22:00:55.923164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.864 [2024-11-27 22:00:55.925832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.864 [2024-11-27 22:00:55.925880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:32.864 [2024-11-27 22:00:55.925892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.620 ms 00:28:32.864 [2024-11-27 22:00:55.925901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.864 [2024-11-27 22:00:55.928321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.864 [2024-11-27 22:00:55.928378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:32.864 [2024-11-27 22:00:55.928389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.340 ms 00:28:32.864 [2024-11-27 22:00:55.928396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.864 [2024-11-27 22:00:55.928439] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:32.864 [2024-11-27 22:00:55.928454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:32.864 [2024-11-27 22:00:55.928466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:28:32.864 [2024-11-27 22:00:55.928475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:32.864 [2024-11-27 22:00:55.928484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
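Each ftl_dev_dump_bands record in the dump around this point reports, per FTL band, the number of valid blocks against the band capacity (261120 blocks in this run), the band's write count, and its state: here only Band 1 is fully written and closed, Band 2 is partially written and still open, and every remaining band is free. The ftl_dev_dump_stats block further down reports total writes, user writes, and the write amplification factor (WAF), which is printed as "inf" in this run because no user writes were recorded. As a purely hypothetical convenience, not part of the test suite, a dump like this can be summarized with standard tools, assuming the console output was saved to a file named ftl.log:

  # Count bands per state and sum the valid blocks reported by ftl_dev_dump_bands.
  grep 'ftl_dev_dump_bands' ftl.log | grep -o 'state: [a-z]*' | sort | uniq -c
  grep 'ftl_dev_dump_bands' ftl.log | grep -oE 'Band [0-9]+: [0-9]+' | awk '{ sum += $3 } END { print sum, "valid blocks" }'

For this run that sum is 261120 + 1536 = 262656, which matches the "total valid LBAs: 262656" figure in the stats dump below.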
00:28:32.864 [2024-11-27 22:00:55.928492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:32.864 [2024-11-27 22:00:55.928500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:32.864 [2024-11-27 22:00:55.928509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:32.864 [2024-11-27 22:00:55.928517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:32.864 [2024-11-27 22:00:55.928527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:32.864 [2024-11-27 22:00:55.928535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:32.864 [2024-11-27 22:00:55.928543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:32.864 [2024-11-27 22:00:55.928550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:32.864 [2024-11-27 22:00:55.928558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:32.864 [2024-11-27 22:00:55.928566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:32.864 [2024-11-27 22:00:55.928574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:32.864 [2024-11-27 22:00:55.928582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:32.864 [2024-11-27 22:00:55.928590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:32.864 [2024-11-27 22:00:55.928598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:32.864 [2024-11-27 22:00:55.928606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:32.864 [2024-11-27 22:00:55.928613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:32.864 [2024-11-27 22:00:55.928621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:32.864 [2024-11-27 22:00:55.928628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:32.864 [2024-11-27 22:00:55.928636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:32.864 [2024-11-27 22:00:55.928643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:32.864 [2024-11-27 22:00:55.928650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:32.864 [2024-11-27 22:00:55.928658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:32.864 [2024-11-27 22:00:55.928666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.928997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929195] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:32.865 [2024-11-27 22:00:55.929411] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:32.865 [2024-11-27 22:00:55.929421] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1938720-a751-43f9-bccf-05588f3ff9ab 00:28:32.866 [2024-11-27 22:00:55.929430] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:28:32.866 [2024-11-27 22:00:55.929438] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:28:32.866 [2024-11-27 22:00:55.929446] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:32.866 [2024-11-27 22:00:55.929454] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:32.866 [2024-11-27 22:00:55.929462] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:32.866 [2024-11-27 22:00:55.929471] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:32.866 [2024-11-27 22:00:55.929484] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:32.866 [2024-11-27 22:00:55.929490] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:32.866 [2024-11-27 22:00:55.929496] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:32.866 [2024-11-27 22:00:55.929503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.866 [2024-11-27 22:00:55.929522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:32.866 [2024-11-27 22:00:55.929531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.066 ms 00:28:32.866 [2024-11-27 22:00:55.929539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.866 [2024-11-27 22:00:55.932074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.866 [2024-11-27 22:00:55.932117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:32.866 [2024-11-27 22:00:55.932127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.513 ms 00:28:32.866 [2024-11-27 22:00:55.932135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.866 [2024-11-27 22:00:55.932264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.866 [2024-11-27 22:00:55.932274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:32.866 [2024-11-27 22:00:55.932282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:28:32.866 [2024-11-27 22:00:55.932290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.866 [2024-11-27 22:00:55.940230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.866 [2024-11-27 22:00:55.940286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:32.866 [2024-11-27 22:00:55.940297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.866 [2024-11-27 22:00:55.940309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.866 [2024-11-27 22:00:55.940385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.866 [2024-11-27 22:00:55.940394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:32.866 [2024-11-27 22:00:55.940408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.866 [2024-11-27 22:00:55.940415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.866 [2024-11-27 22:00:55.940481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.866 [2024-11-27 22:00:55.940492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:32.866 [2024-11-27 22:00:55.940500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.866 [2024-11-27 22:00:55.940508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.866 [2024-11-27 22:00:55.940529] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.866 [2024-11-27 22:00:55.940537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:32.866 [2024-11-27 22:00:55.940545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.866 [2024-11-27 22:00:55.940553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.866 [2024-11-27 22:00:55.955055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.866 [2024-11-27 22:00:55.955101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:32.866 [2024-11-27 22:00:55.955111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.866 [2024-11-27 22:00:55.955133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.866 [2024-11-27 22:00:55.966046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.866 [2024-11-27 22:00:55.966104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:32.866 [2024-11-27 22:00:55.966116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.866 [2024-11-27 22:00:55.966123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.866 [2024-11-27 22:00:55.966171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.866 [2024-11-27 22:00:55.966187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:32.866 [2024-11-27 22:00:55.966196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.866 [2024-11-27 22:00:55.966204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.866 [2024-11-27 22:00:55.966247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.866 [2024-11-27 22:00:55.966257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:32.866 [2024-11-27 22:00:55.966265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.866 [2024-11-27 22:00:55.966273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.866 [2024-11-27 22:00:55.966366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.866 [2024-11-27 22:00:55.966377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:32.866 [2024-11-27 22:00:55.966385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.866 [2024-11-27 22:00:55.966393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.866 [2024-11-27 22:00:55.966423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.866 [2024-11-27 22:00:55.966436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:32.866 [2024-11-27 22:00:55.966445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.866 [2024-11-27 22:00:55.966452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.866 [2024-11-27 22:00:55.966498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.866 [2024-11-27 22:00:55.966507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:32.866 [2024-11-27 22:00:55.966518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.866 [2024-11-27 22:00:55.966531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:28:32.866 [2024-11-27 22:00:55.966583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.866 [2024-11-27 22:00:55.966593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:32.866 [2024-11-27 22:00:55.966602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.866 [2024-11-27 22:00:55.966617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.866 [2024-11-27 22:00:55.966763] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 79.523 ms, result 0 00:28:33.128 00:28:33.128 00:28:33.128 22:00:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:35.682 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:28:35.682 22:00:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:28:35.682 22:00:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:28:35.682 22:00:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:35.682 22:00:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:35.682 22:00:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:35.682 22:00:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:35.682 22:00:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:35.682 Process with pid 90838 is not found 00:28:35.682 22:00:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 90838 00:28:35.682 22:00:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 90838 ']' 00:28:35.682 22:00:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 90838 00:28:35.682 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90838) - No such process 00:28:35.682 22:00:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 90838 is not found' 00:28:35.682 22:00:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:28:35.945 Remove shared memory files 00:28:35.945 22:00:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:28:35.945 22:00:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:35.945 22:00:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:35.945 22:00:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:35.945 22:00:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:28:35.945 22:00:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:35.945 22:00:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:35.945 00:28:35.945 real 3m52.137s 00:28:35.945 user 4m10.992s 00:28:35.945 sys 0m24.663s 00:28:35.945 22:00:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:35.945 ************************************ 00:28:35.945 END TEST ftl_dirty_shutdown 00:28:35.945 ************************************ 00:28:35.945 22:00:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:35.945 22:00:59 ftl -- ftl/ftl.sh@78 -- # run_test 
ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:35.945 22:00:59 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:35.945 22:00:59 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:35.945 22:00:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:35.945 ************************************ 00:28:35.945 START TEST ftl_upgrade_shutdown 00:28:35.945 ************************************ 00:28:35.945 22:00:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:36.208 * Looking for test storage... 00:28:36.208 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:36.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.208 --rc genhtml_branch_coverage=1 00:28:36.208 --rc genhtml_function_coverage=1 00:28:36.208 --rc genhtml_legend=1 00:28:36.208 --rc geninfo_all_blocks=1 00:28:36.208 --rc geninfo_unexecuted_blocks=1 00:28:36.208 00:28:36.208 ' 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:36.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.208 --rc genhtml_branch_coverage=1 00:28:36.208 --rc genhtml_function_coverage=1 00:28:36.208 --rc genhtml_legend=1 00:28:36.208 --rc geninfo_all_blocks=1 00:28:36.208 --rc geninfo_unexecuted_blocks=1 00:28:36.208 00:28:36.208 ' 00:28:36.208 22:00:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:36.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.208 --rc genhtml_branch_coverage=1 00:28:36.208 --rc genhtml_function_coverage=1 00:28:36.208 --rc genhtml_legend=1 00:28:36.208 --rc geninfo_all_blocks=1 00:28:36.209 --rc geninfo_unexecuted_blocks=1 00:28:36.209 00:28:36.209 ' 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:36.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.209 --rc genhtml_branch_coverage=1 00:28:36.209 --rc genhtml_function_coverage=1 00:28:36.209 --rc genhtml_legend=1 00:28:36.209 --rc geninfo_all_blocks=1 00:28:36.209 --rc geninfo_unexecuted_blocks=1 00:28:36.209 00:28:36.209 ' 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:28:36.209 22:00:59 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=93354 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 93354 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 93354 ']' 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:36.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:36.209 22:00:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:28:36.209 [2024-11-27 22:00:59.292498] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
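The FTL_* values exported above (FTL_BDEV=ftl, FTL_BASE=0000:00:11.0 with FTL_BASE_SIZE=20480, FTL_CACHE=0000:00:10.0 with FTL_CACHE_SIZE=5120, FTL_L2P_DRAM_LIMIT=2) drive the target setup that follows. Condensed into a sketch, with controller names and sizes taken from this run and the actual logic living in test/ftl/common.sh (which also deletes any pre-existing lvol stores first), the RPC sequence issued below amounts to:

  # Sketch only: rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and
  # <lvs-uuid> / <lvol-uuid> stand for the UUIDs returned at runtime
  # (916e6ba0-1a4d-4086-b132-f62df3b11962 and 676da863-e86b-477b-b0b1-2afa42c5f37d in this run).
  rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # base device, exposed as basen1
  rpc.py bdev_lvol_create_lvstore basen1 lvs                            # lvol store on the base bdev
  rpc.py bdev_lvol_create basen1p0 20480 -t -u <lvs-uuid>               # 20480 MiB thin-provisioned base volume
  rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # cache device, exposed as cachen1
  rpc.py bdev_split_create cachen1 -s 5120 1                            # one 5120 MiB split for the write buffer
  rpc.py -t 60 bdev_ftl_create -b ftl -d <lvol-uuid> -c cachen1p0 --l2p_dram_limit 2

For reference, the sizes reported by bdev_get_bdevs further down follow directly from block_size x num_blocks: 4096 B x 1310720 blocks = 5120 MiB for basen1, and 4096 B x 5242880 blocks = 20480 MiB for the thin-provisioned basen1p0 volume.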
00:28:36.209 [2024-11-27 22:00:59.292648] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93354 ] 00:28:36.470 [2024-11-27 22:00:59.434896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.470 [2024-11-27 22:00:59.463898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.042 22:01:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:37.042 22:01:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:37.042 22:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:37.042 22:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:28:37.042 22:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:28:37.042 22:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:37.042 22:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:28:37.042 22:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:37.042 22:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:28:37.042 22:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:37.042 22:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:28:37.042 22:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:37.042 22:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:28:37.042 22:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:37.042 22:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:28:37.042 22:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:37.042 22:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:28:37.042 22:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:28:37.042 22:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:28:37.042 22:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:37.042 22:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:28:37.042 22:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:28:37.303 22:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:28:37.564 22:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:28:37.564 22:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:28:37.564 22:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:28:37.564 22:01:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:28:37.564 22:01:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:37.564 22:01:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:37.564 22:01:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:28:37.564 22:01:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:28:37.564 22:01:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:37.564 { 00:28:37.564 "name": "basen1", 00:28:37.564 "aliases": [ 00:28:37.564 "5da6841b-8077-4393-a575-9c362d87587f" 00:28:37.564 ], 00:28:37.564 "product_name": "NVMe disk", 00:28:37.564 "block_size": 4096, 00:28:37.564 "num_blocks": 1310720, 00:28:37.564 "uuid": "5da6841b-8077-4393-a575-9c362d87587f", 00:28:37.564 "numa_id": -1, 00:28:37.564 "assigned_rate_limits": { 00:28:37.564 "rw_ios_per_sec": 0, 00:28:37.564 "rw_mbytes_per_sec": 0, 00:28:37.564 "r_mbytes_per_sec": 0, 00:28:37.564 "w_mbytes_per_sec": 0 00:28:37.564 }, 00:28:37.564 "claimed": true, 00:28:37.564 "claim_type": "read_many_write_one", 00:28:37.564 "zoned": false, 00:28:37.564 "supported_io_types": { 00:28:37.564 "read": true, 00:28:37.564 "write": true, 00:28:37.564 "unmap": true, 00:28:37.564 "flush": true, 00:28:37.564 "reset": true, 00:28:37.564 "nvme_admin": true, 00:28:37.564 "nvme_io": true, 00:28:37.564 "nvme_io_md": false, 00:28:37.564 "write_zeroes": true, 00:28:37.564 "zcopy": false, 00:28:37.564 "get_zone_info": false, 00:28:37.564 "zone_management": false, 00:28:37.564 "zone_append": false, 00:28:37.564 "compare": true, 00:28:37.564 "compare_and_write": false, 00:28:37.564 "abort": true, 00:28:37.564 "seek_hole": false, 00:28:37.564 "seek_data": false, 00:28:37.564 "copy": true, 00:28:37.564 "nvme_iov_md": false 00:28:37.564 }, 00:28:37.564 "driver_specific": { 00:28:37.564 "nvme": [ 00:28:37.564 { 00:28:37.564 "pci_address": "0000:00:11.0", 00:28:37.564 "trid": { 00:28:37.564 "trtype": "PCIe", 00:28:37.564 "traddr": "0000:00:11.0" 00:28:37.564 }, 00:28:37.564 "ctrlr_data": { 00:28:37.564 "cntlid": 0, 00:28:37.564 "vendor_id": "0x1b36", 00:28:37.564 "model_number": "QEMU NVMe Ctrl", 00:28:37.564 "serial_number": "12341", 00:28:37.564 "firmware_revision": "8.0.0", 00:28:37.564 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:37.564 "oacs": { 00:28:37.564 "security": 0, 00:28:37.564 "format": 1, 00:28:37.564 "firmware": 0, 00:28:37.564 "ns_manage": 1 00:28:37.564 }, 00:28:37.564 "multi_ctrlr": false, 00:28:37.564 "ana_reporting": false 00:28:37.564 }, 00:28:37.564 "vs": { 00:28:37.564 "nvme_version": "1.4" 00:28:37.564 }, 00:28:37.564 "ns_data": { 00:28:37.564 "id": 1, 00:28:37.564 "can_share": false 00:28:37.564 } 00:28:37.564 } 00:28:37.564 ], 00:28:37.564 "mp_policy": "active_passive" 00:28:37.564 } 00:28:37.564 } 00:28:37.564 ]' 00:28:37.564 22:01:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:37.826 22:01:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:37.826 22:01:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:37.826 22:01:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:28:37.826 22:01:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:28:37.826 22:01:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:28:37.826 22:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:28:37.826 22:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:28:37.826 22:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:28:37.826 22:01:00 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:37.826 22:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:38.086 22:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=4cd6e932-b230-4a4c-8502-ff9747bb094d 00:28:38.086 22:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:28:38.086 22:01:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4cd6e932-b230-4a4c-8502-ff9747bb094d 00:28:38.345 22:01:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:28:38.345 22:01:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=916e6ba0-1a4d-4086-b132-f62df3b11962 00:28:38.345 22:01:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 916e6ba0-1a4d-4086-b132-f62df3b11962 00:28:38.603 22:01:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=676da863-e86b-477b-b0b1-2afa42c5f37d 00:28:38.604 22:01:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 676da863-e86b-477b-b0b1-2afa42c5f37d ]] 00:28:38.604 22:01:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 676da863-e86b-477b-b0b1-2afa42c5f37d 5120 00:28:38.604 22:01:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:28:38.604 22:01:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:38.604 22:01:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=676da863-e86b-477b-b0b1-2afa42c5f37d 00:28:38.604 22:01:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:28:38.604 22:01:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 676da863-e86b-477b-b0b1-2afa42c5f37d 00:28:38.604 22:01:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=676da863-e86b-477b-b0b1-2afa42c5f37d 00:28:38.604 22:01:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:38.604 22:01:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:38.604 22:01:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:38.604 22:01:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 676da863-e86b-477b-b0b1-2afa42c5f37d 00:28:38.861 22:01:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:38.861 { 00:28:38.861 "name": "676da863-e86b-477b-b0b1-2afa42c5f37d", 00:28:38.862 "aliases": [ 00:28:38.862 "lvs/basen1p0" 00:28:38.862 ], 00:28:38.862 "product_name": "Logical Volume", 00:28:38.862 "block_size": 4096, 00:28:38.862 "num_blocks": 5242880, 00:28:38.862 "uuid": "676da863-e86b-477b-b0b1-2afa42c5f37d", 00:28:38.862 "assigned_rate_limits": { 00:28:38.862 "rw_ios_per_sec": 0, 00:28:38.862 "rw_mbytes_per_sec": 0, 00:28:38.862 "r_mbytes_per_sec": 0, 00:28:38.862 "w_mbytes_per_sec": 0 00:28:38.862 }, 00:28:38.862 "claimed": false, 00:28:38.862 "zoned": false, 00:28:38.862 "supported_io_types": { 00:28:38.862 "read": true, 00:28:38.862 "write": true, 00:28:38.862 "unmap": true, 00:28:38.862 "flush": false, 00:28:38.862 "reset": true, 00:28:38.862 "nvme_admin": false, 00:28:38.862 "nvme_io": false, 00:28:38.862 "nvme_io_md": false, 00:28:38.862 "write_zeroes": 
true, 00:28:38.862 "zcopy": false, 00:28:38.862 "get_zone_info": false, 00:28:38.862 "zone_management": false, 00:28:38.862 "zone_append": false, 00:28:38.862 "compare": false, 00:28:38.862 "compare_and_write": false, 00:28:38.862 "abort": false, 00:28:38.862 "seek_hole": true, 00:28:38.862 "seek_data": true, 00:28:38.862 "copy": false, 00:28:38.862 "nvme_iov_md": false 00:28:38.862 }, 00:28:38.862 "driver_specific": { 00:28:38.862 "lvol": { 00:28:38.862 "lvol_store_uuid": "916e6ba0-1a4d-4086-b132-f62df3b11962", 00:28:38.862 "base_bdev": "basen1", 00:28:38.862 "thin_provision": true, 00:28:38.862 "num_allocated_clusters": 0, 00:28:38.862 "snapshot": false, 00:28:38.862 "clone": false, 00:28:38.862 "esnap_clone": false 00:28:38.862 } 00:28:38.862 } 00:28:38.862 } 00:28:38.862 ]' 00:28:38.862 22:01:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:38.862 22:01:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:38.862 22:01:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:38.862 22:01:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:28:38.862 22:01:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:28:38.862 22:01:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:28:38.862 22:01:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:28:38.862 22:01:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:28:38.862 22:01:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:28:39.119 22:01:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:28:39.119 22:01:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:28:39.119 22:01:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:28:39.376 22:01:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:28:39.376 22:01:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:28:39.376 22:01:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 676da863-e86b-477b-b0b1-2afa42c5f37d -c cachen1p0 --l2p_dram_limit 2 00:28:39.636 [2024-11-27 22:01:02.540057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.636 [2024-11-27 22:01:02.540102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:39.636 [2024-11-27 22:01:02.540114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:39.636 [2024-11-27 22:01:02.540122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.636 [2024-11-27 22:01:02.540166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.636 [2024-11-27 22:01:02.540177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:39.636 [2024-11-27 22:01:02.540183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:28:39.636 [2024-11-27 22:01:02.540192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.636 [2024-11-27 22:01:02.540207] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:39.636 [2024-11-27 
22:01:02.540488] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:39.636 [2024-11-27 22:01:02.540506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.636 [2024-11-27 22:01:02.540516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:39.636 [2024-11-27 22:01:02.540522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.303 ms 00:28:39.636 [2024-11-27 22:01:02.540530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.636 [2024-11-27 22:01:02.540581] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 9ceb4166-f435-44b7-9a91-79e405710158 00:28:39.636 [2024-11-27 22:01:02.541558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.636 [2024-11-27 22:01:02.541585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:28:39.636 [2024-11-27 22:01:02.541594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:28:39.636 [2024-11-27 22:01:02.541601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.636 [2024-11-27 22:01:02.546315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.636 [2024-11-27 22:01:02.546348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:39.636 [2024-11-27 22:01:02.546358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.680 ms 00:28:39.636 [2024-11-27 22:01:02.546364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.636 [2024-11-27 22:01:02.546403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.636 [2024-11-27 22:01:02.546412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:39.636 [2024-11-27 22:01:02.546420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:28:39.636 [2024-11-27 22:01:02.546426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.636 [2024-11-27 22:01:02.546456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.636 [2024-11-27 22:01:02.546463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:39.636 [2024-11-27 22:01:02.546472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:39.636 [2024-11-27 22:01:02.546479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.636 [2024-11-27 22:01:02.546498] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:39.636 [2024-11-27 22:01:02.547757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.636 [2024-11-27 22:01:02.547786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:39.636 [2024-11-27 22:01:02.547794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.266 ms 00:28:39.636 [2024-11-27 22:01:02.547801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.636 [2024-11-27 22:01:02.547820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.636 [2024-11-27 22:01:02.547828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:39.636 [2024-11-27 22:01:02.547834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:39.636 [2024-11-27 22:01:02.547843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:39.636 [2024-11-27 22:01:02.547866] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:28:39.636 [2024-11-27 22:01:02.547976] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:39.636 [2024-11-27 22:01:02.547985] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:39.636 [2024-11-27 22:01:02.547994] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:39.636 [2024-11-27 22:01:02.548003] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:39.636 [2024-11-27 22:01:02.548014] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:39.636 [2024-11-27 22:01:02.548021] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:39.636 [2024-11-27 22:01:02.548030] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:39.636 [2024-11-27 22:01:02.548035] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:39.636 [2024-11-27 22:01:02.548042] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:39.636 [2024-11-27 22:01:02.548048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.636 [2024-11-27 22:01:02.548055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:39.636 [2024-11-27 22:01:02.548061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.183 ms 00:28:39.636 [2024-11-27 22:01:02.548068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.636 [2024-11-27 22:01:02.548133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.636 [2024-11-27 22:01:02.548142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:39.636 [2024-11-27 22:01:02.548148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:28:39.636 [2024-11-27 22:01:02.548156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.636 [2024-11-27 22:01:02.548227] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:39.636 [2024-11-27 22:01:02.548236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:39.636 [2024-11-27 22:01:02.548242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:39.636 [2024-11-27 22:01:02.548249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:39.636 [2024-11-27 22:01:02.548255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:39.636 [2024-11-27 22:01:02.548262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:39.636 [2024-11-27 22:01:02.548268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:39.636 [2024-11-27 22:01:02.548274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:39.636 [2024-11-27 22:01:02.548280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:39.636 [2024-11-27 22:01:02.548288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:39.636 [2024-11-27 22:01:02.548293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:39.636 [2024-11-27 22:01:02.548299] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:28:39.637 [2024-11-27 22:01:02.548304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:39.637 [2024-11-27 22:01:02.548312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:39.637 [2024-11-27 22:01:02.548317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:39.637 [2024-11-27 22:01:02.548324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:39.637 [2024-11-27 22:01:02.548330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:39.637 [2024-11-27 22:01:02.548348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:39.637 [2024-11-27 22:01:02.548354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:39.637 [2024-11-27 22:01:02.548361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:39.637 [2024-11-27 22:01:02.548366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:39.637 [2024-11-27 22:01:02.548373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:39.637 [2024-11-27 22:01:02.548378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:39.637 [2024-11-27 22:01:02.548384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:39.637 [2024-11-27 22:01:02.548389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:39.637 [2024-11-27 22:01:02.548396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:39.637 [2024-11-27 22:01:02.548401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:39.637 [2024-11-27 22:01:02.548408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:39.637 [2024-11-27 22:01:02.548413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:39.637 [2024-11-27 22:01:02.548421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:39.637 [2024-11-27 22:01:02.548426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:39.637 [2024-11-27 22:01:02.548433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:39.637 [2024-11-27 22:01:02.548437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:39.637 [2024-11-27 22:01:02.548444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:39.637 [2024-11-27 22:01:02.548449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:39.637 [2024-11-27 22:01:02.548457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:39.637 [2024-11-27 22:01:02.548461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:39.637 [2024-11-27 22:01:02.548468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:39.637 [2024-11-27 22:01:02.548473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:39.637 [2024-11-27 22:01:02.548479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:39.637 [2024-11-27 22:01:02.548484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:39.637 [2024-11-27 22:01:02.548491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:39.637 [2024-11-27 22:01:02.548495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:39.637 [2024-11-27 22:01:02.548502] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:28:39.637 [2024-11-27 22:01:02.548508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:39.637 [2024-11-27 22:01:02.548516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:39.637 [2024-11-27 22:01:02.548522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:39.637 [2024-11-27 22:01:02.548530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:39.637 [2024-11-27 22:01:02.548540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:39.637 [2024-11-27 22:01:02.548547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:39.637 [2024-11-27 22:01:02.548552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:39.637 [2024-11-27 22:01:02.548559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:39.637 [2024-11-27 22:01:02.548564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:39.637 [2024-11-27 22:01:02.548573] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:39.637 [2024-11-27 22:01:02.548580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:39.637 [2024-11-27 22:01:02.548588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:39.637 [2024-11-27 22:01:02.548593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:39.637 [2024-11-27 22:01:02.548601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:39.637 [2024-11-27 22:01:02.548607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:39.637 [2024-11-27 22:01:02.548614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:39.637 [2024-11-27 22:01:02.548619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:39.637 [2024-11-27 22:01:02.548627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:39.637 [2024-11-27 22:01:02.548633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:39.637 [2024-11-27 22:01:02.548639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:39.637 [2024-11-27 22:01:02.548645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:39.637 [2024-11-27 22:01:02.548652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:39.637 [2024-11-27 22:01:02.548658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:39.637 [2024-11-27 22:01:02.548664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:39.637 [2024-11-27 22:01:02.548670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:39.637 [2024-11-27 22:01:02.548677] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:39.637 [2024-11-27 22:01:02.548683] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:39.637 [2024-11-27 22:01:02.548691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:39.637 [2024-11-27 22:01:02.548696] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:39.637 [2024-11-27 22:01:02.548703] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:39.637 [2024-11-27 22:01:02.548708] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:39.637 [2024-11-27 22:01:02.548716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.637 [2024-11-27 22:01:02.548722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:39.637 [2024-11-27 22:01:02.548730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.538 ms 00:28:39.637 [2024-11-27 22:01:02.548738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.637 [2024-11-27 22:01:02.548768] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
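For reference, the write-buffer side of the FTL instance being started here was assembled with the RPC sequence traced earlier in this run; a minimal sketch of those same steps follows (the PCIe address, split size and base-bdev UUID are the values from this particular run and would differ elsewhere; the base bdev is the thin-provisioned lvol shown in the JSON above):

  # attach the cache NVMe controller and carve a 5120 MiB split to use as the write buffer
  scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
  scripts/rpc.py bdev_split_create cachen1 -s 5120 1
  # create the FTL bdev on top of the lvol base device and the cache split
  scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 676da863-e86b-477b-b0b1-2afa42c5f37d -c cachen1p0 --l2p_dram_limit 2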
00:28:39.637 [2024-11-27 22:01:02.548776] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:43.838 [2024-11-27 22:01:06.365531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.838 [2024-11-27 22:01:06.365623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:43.838 [2024-11-27 22:01:06.365656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3816.738 ms 00:28:43.838 [2024-11-27 22:01:06.365666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.838 [2024-11-27 22:01:06.379734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.838 [2024-11-27 22:01:06.379794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:43.838 [2024-11-27 22:01:06.379812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.937 ms 00:28:43.838 [2024-11-27 22:01:06.379822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.838 [2024-11-27 22:01:06.379908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.838 [2024-11-27 22:01:06.379919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:43.838 [2024-11-27 22:01:06.379931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:28:43.838 [2024-11-27 22:01:06.379942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.838 [2024-11-27 22:01:06.392423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.838 [2024-11-27 22:01:06.392471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:43.838 [2024-11-27 22:01:06.392486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.435 ms 00:28:43.838 [2024-11-27 22:01:06.392504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.838 [2024-11-27 22:01:06.392544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.838 [2024-11-27 22:01:06.392552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:43.838 [2024-11-27 22:01:06.392564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:43.838 [2024-11-27 22:01:06.392572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.838 [2024-11-27 22:01:06.393170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.838 [2024-11-27 22:01:06.393214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:43.838 [2024-11-27 22:01:06.393229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.537 ms 00:28:43.838 [2024-11-27 22:01:06.393243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.838 [2024-11-27 22:01:06.393301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.838 [2024-11-27 22:01:06.393311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:43.838 [2024-11-27 22:01:06.393324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:28:43.838 [2024-11-27 22:01:06.393333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.838 [2024-11-27 22:01:06.401672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.838 [2024-11-27 22:01:06.401719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:43.838 [2024-11-27 22:01:06.401732] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.270 ms 00:28:43.838 [2024-11-27 22:01:06.401740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.838 [2024-11-27 22:01:06.421201] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:43.838 [2024-11-27 22:01:06.422492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.838 [2024-11-27 22:01:06.422545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:43.838 [2024-11-27 22:01:06.422560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.673 ms 00:28:43.839 [2024-11-27 22:01:06.422572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.839 [2024-11-27 22:01:06.440658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.839 [2024-11-27 22:01:06.440731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:28:43.839 [2024-11-27 22:01:06.440744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.040 ms 00:28:43.839 [2024-11-27 22:01:06.440759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.839 [2024-11-27 22:01:06.440882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.839 [2024-11-27 22:01:06.440896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:43.839 [2024-11-27 22:01:06.440905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:28:43.839 [2024-11-27 22:01:06.440922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.839 [2024-11-27 22:01:06.446735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.839 [2024-11-27 22:01:06.446797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:28:43.839 [2024-11-27 22:01:06.446813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.791 ms 00:28:43.839 [2024-11-27 22:01:06.446824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.839 [2024-11-27 22:01:06.452230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.839 [2024-11-27 22:01:06.452286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:28:43.839 [2024-11-27 22:01:06.452297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.354 ms 00:28:43.839 [2024-11-27 22:01:06.452307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.839 [2024-11-27 22:01:06.452655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.839 [2024-11-27 22:01:06.452712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:43.839 [2024-11-27 22:01:06.452723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.288 ms 00:28:43.839 [2024-11-27 22:01:06.452737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.839 [2024-11-27 22:01:06.493148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.839 [2024-11-27 22:01:06.493208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:28:43.839 [2024-11-27 22:01:06.493224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.386 ms 00:28:43.839 [2024-11-27 22:01:06.493235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.839 [2024-11-27 22:01:06.500161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:28:43.839 [2024-11-27 22:01:06.500220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:28:43.839 [2024-11-27 22:01:06.500232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.864 ms 00:28:43.839 [2024-11-27 22:01:06.500244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.839 [2024-11-27 22:01:06.506281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.839 [2024-11-27 22:01:06.506374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:28:43.839 [2024-11-27 22:01:06.506386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.988 ms 00:28:43.839 [2024-11-27 22:01:06.506396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.839 [2024-11-27 22:01:06.512325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.839 [2024-11-27 22:01:06.512394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:43.839 [2024-11-27 22:01:06.512405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.881 ms 00:28:43.839 [2024-11-27 22:01:06.512418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.839 [2024-11-27 22:01:06.512467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.839 [2024-11-27 22:01:06.512480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:43.839 [2024-11-27 22:01:06.512489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:43.839 [2024-11-27 22:01:06.512500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.839 [2024-11-27 22:01:06.512571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.839 [2024-11-27 22:01:06.512584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:43.839 [2024-11-27 22:01:06.512592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:28:43.839 [2024-11-27 22:01:06.512605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.839 [2024-11-27 22:01:06.513692] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3973.136 ms, result 0 00:28:43.839 { 00:28:43.839 "name": "ftl", 00:28:43.839 "uuid": "9ceb4166-f435-44b7-9a91-79e405710158" 00:28:43.839 } 00:28:43.839 22:01:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:28:43.839 [2024-11-27 22:01:06.755788] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:43.839 22:01:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:28:44.100 22:01:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:28:44.100 [2024-11-27 22:01:07.208261] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:44.361 22:01:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:28:44.361 [2024-11-27 22:01:07.424683] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:44.361 22:01:07 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:28:44.933 Fill FTL, iteration 1 00:28:44.933 22:01:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:28:44.933 22:01:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:28:44.933 22:01:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:28:44.933 22:01:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:28:44.933 22:01:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:28:44.933 22:01:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:28:44.933 22:01:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:28:44.933 22:01:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:28:44.933 22:01:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:28:44.933 22:01:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:44.933 22:01:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:28:44.933 22:01:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:44.933 22:01:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:44.933 22:01:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:44.933 22:01:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:44.933 22:01:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:28:44.933 22:01:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=93482 00:28:44.933 22:01:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:28:44.933 22:01:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:28:44.933 22:01:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 93482 /var/tmp/spdk.tgt.sock 00:28:44.933 22:01:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 93482 ']' 00:28:44.933 22:01:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:28:44.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:28:44.933 22:01:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:44.933 22:01:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:28:44.933 22:01:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:44.933 22:01:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:44.933 [2024-11-27 22:01:07.881758] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:28:44.933 [2024-11-27 22:01:07.882823] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93482 ] 00:28:44.933 [2024-11-27 22:01:08.028890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.194 [2024-11-27 22:01:08.061890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.765 22:01:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:45.765 22:01:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:45.766 22:01:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:28:46.026 ftln1 00:28:46.026 22:01:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:28:46.026 22:01:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:28:46.293 22:01:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:28:46.293 22:01:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 93482 00:28:46.293 22:01:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 93482 ']' 00:28:46.293 22:01:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 93482 00:28:46.293 22:01:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:28:46.293 22:01:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:46.293 22:01:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93482 00:28:46.293 killing process with pid 93482 00:28:46.293 22:01:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:46.293 22:01:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:46.293 22:01:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93482' 00:28:46.293 22:01:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 93482 00:28:46.293 22:01:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 93482 00:28:46.554 22:01:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:28:46.554 22:01:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:46.554 [2024-11-27 22:01:09.644060] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:28:46.554 [2024-11-27 22:01:09.644214] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93514 ] 00:28:46.816 [2024-11-27 22:01:09.791566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.816 [2024-11-27 22:01:09.821911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.203  [2024-11-27T22:01:12.262Z] Copying: 176/1024 [MB] (176 MBps) [2024-11-27T22:01:13.197Z] Copying: 375/1024 [MB] (199 MBps) [2024-11-27T22:01:14.132Z] Copying: 642/1024 [MB] (267 MBps) [2024-11-27T22:01:14.700Z] Copying: 903/1024 [MB] (261 MBps) [2024-11-27T22:01:14.700Z] Copying: 1024/1024 [MB] (average 229 MBps) 00:28:51.579 00:28:51.579 Calculate MD5 checksum, iteration 1 00:28:51.579 22:01:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:28:51.579 22:01:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:28:51.579 22:01:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:51.579 22:01:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:51.579 22:01:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:51.579 22:01:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:51.579 22:01:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:51.579 22:01:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:51.838 [2024-11-27 22:01:14.704479] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:28:51.838 [2024-11-27 22:01:14.704618] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93567 ] 00:28:51.838 [2024-11-27 22:01:14.847069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.838 [2024-11-27 22:01:14.867654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:53.212  [2024-11-27T22:01:16.591Z] Copying: 662/1024 [MB] (662 MBps) [2024-11-27T22:01:16.850Z] Copying: 1024/1024 [MB] (average 663 MBps) 00:28:53.729 00:28:53.729 22:01:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:28:53.729 22:01:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:56.344 22:01:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:56.344 22:01:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=5bb1228973e4682d5ab243faca2641ff 00:28:56.344 22:01:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:56.344 Fill FTL, iteration 2 00:28:56.344 22:01:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:56.344 22:01:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:28:56.344 22:01:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:56.344 22:01:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:56.344 22:01:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:56.344 22:01:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:56.344 22:01:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:56.344 22:01:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:56.344 [2024-11-27 22:01:18.939703] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:28:56.344 [2024-11-27 22:01:18.940007] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93617 ] 00:28:56.344 [2024-11-27 22:01:19.081655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.344 [2024-11-27 22:01:19.101241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:57.303  [2024-11-27T22:01:21.364Z] Copying: 182/1024 [MB] (182 MBps) [2024-11-27T22:01:22.299Z] Copying: 367/1024 [MB] (185 MBps) [2024-11-27T22:01:23.675Z] Copying: 626/1024 [MB] (259 MBps) [2024-11-27T22:01:24.245Z] Copying: 884/1024 [MB] (258 MBps) [2024-11-27T22:01:24.245Z] Copying: 1024/1024 [MB] (average 219 MBps) 00:29:01.124 00:29:01.124 22:01:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:29:01.124 22:01:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:29:01.124 Calculate MD5 checksum, iteration 2 00:29:01.124 22:01:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:01.124 22:01:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:01.124 22:01:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:01.124 22:01:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:01.124 22:01:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:01.124 22:01:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:01.124 [2024-11-27 22:01:24.147616] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:29:01.124 [2024-11-27 22:01:24.148017] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93671 ] 00:29:01.383 [2024-11-27 22:01:24.289333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.383 [2024-11-27 22:01:24.307980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:02.758  [2024-11-27T22:01:26.445Z] Copying: 622/1024 [MB] (622 MBps) [2024-11-27T22:01:27.015Z] Copying: 1024/1024 [MB] (average 626 MBps) 00:29:03.894 00:29:03.894 22:01:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:29:03.894 22:01:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:05.807 22:01:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:05.807 22:01:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=545bc764b1a63652ea7d86c98c529238 00:29:05.807 22:01:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:05.807 22:01:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:05.807 22:01:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:06.068 [2024-11-27 22:01:28.974917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.068 [2024-11-27 22:01:28.974965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:06.068 [2024-11-27 22:01:28.974980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:06.068 [2024-11-27 22:01:28.974989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.068 [2024-11-27 22:01:28.975008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.068 [2024-11-27 22:01:28.975014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:06.068 [2024-11-27 22:01:28.975021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:06.068 [2024-11-27 22:01:28.975028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.068 [2024-11-27 22:01:28.975043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.068 [2024-11-27 22:01:28.975050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:06.068 [2024-11-27 22:01:28.975059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:06.068 [2024-11-27 22:01:28.975067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.068 [2024-11-27 22:01:28.975121] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.189 ms, result 0 00:29:06.068 true 00:29:06.068 22:01:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:06.068 { 00:29:06.068 "name": "ftl", 00:29:06.068 "properties": [ 00:29:06.068 { 00:29:06.068 "name": "superblock_version", 00:29:06.068 "value": 5, 00:29:06.068 "read-only": true 00:29:06.068 }, 00:29:06.068 { 00:29:06.068 "name": "base_device", 00:29:06.068 "bands": [ 00:29:06.068 { 00:29:06.068 "id": 0, 00:29:06.068 "state": "FREE", 00:29:06.068 "validity": 0.0 
00:29:06.068 }, 00:29:06.068 { 00:29:06.068 "id": 1, 00:29:06.068 "state": "FREE", 00:29:06.068 "validity": 0.0 00:29:06.068 }, 00:29:06.068 { 00:29:06.068 "id": 2, 00:29:06.068 "state": "FREE", 00:29:06.068 "validity": 0.0 00:29:06.068 }, 00:29:06.068 { 00:29:06.068 "id": 3, 00:29:06.068 "state": "FREE", 00:29:06.068 "validity": 0.0 00:29:06.068 }, 00:29:06.068 { 00:29:06.068 "id": 4, 00:29:06.068 "state": "FREE", 00:29:06.068 "validity": 0.0 00:29:06.068 }, 00:29:06.068 { 00:29:06.068 "id": 5, 00:29:06.068 "state": "FREE", 00:29:06.068 "validity": 0.0 00:29:06.068 }, 00:29:06.068 { 00:29:06.068 "id": 6, 00:29:06.068 "state": "FREE", 00:29:06.068 "validity": 0.0 00:29:06.068 }, 00:29:06.068 { 00:29:06.068 "id": 7, 00:29:06.068 "state": "FREE", 00:29:06.068 "validity": 0.0 00:29:06.068 }, 00:29:06.068 { 00:29:06.068 "id": 8, 00:29:06.068 "state": "FREE", 00:29:06.068 "validity": 0.0 00:29:06.068 }, 00:29:06.068 { 00:29:06.068 "id": 9, 00:29:06.068 "state": "FREE", 00:29:06.068 "validity": 0.0 00:29:06.068 }, 00:29:06.068 { 00:29:06.068 "id": 10, 00:29:06.068 "state": "FREE", 00:29:06.068 "validity": 0.0 00:29:06.068 }, 00:29:06.068 { 00:29:06.068 "id": 11, 00:29:06.068 "state": "FREE", 00:29:06.068 "validity": 0.0 00:29:06.068 }, 00:29:06.068 { 00:29:06.068 "id": 12, 00:29:06.068 "state": "FREE", 00:29:06.068 "validity": 0.0 00:29:06.068 }, 00:29:06.068 { 00:29:06.068 "id": 13, 00:29:06.068 "state": "FREE", 00:29:06.068 "validity": 0.0 00:29:06.068 }, 00:29:06.068 { 00:29:06.068 "id": 14, 00:29:06.068 "state": "FREE", 00:29:06.068 "validity": 0.0 00:29:06.068 }, 00:29:06.068 { 00:29:06.068 "id": 15, 00:29:06.068 "state": "FREE", 00:29:06.068 "validity": 0.0 00:29:06.068 }, 00:29:06.068 { 00:29:06.068 "id": 16, 00:29:06.068 "state": "FREE", 00:29:06.068 "validity": 0.0 00:29:06.068 }, 00:29:06.068 { 00:29:06.068 "id": 17, 00:29:06.068 "state": "FREE", 00:29:06.068 "validity": 0.0 00:29:06.068 } 00:29:06.068 ], 00:29:06.068 "read-only": true 00:29:06.068 }, 00:29:06.068 { 00:29:06.068 "name": "cache_device", 00:29:06.068 "type": "bdev", 00:29:06.068 "chunks": [ 00:29:06.068 { 00:29:06.068 "id": 0, 00:29:06.068 "state": "INACTIVE", 00:29:06.068 "utilization": 0.0 00:29:06.068 }, 00:29:06.068 { 00:29:06.068 "id": 1, 00:29:06.068 "state": "CLOSED", 00:29:06.068 "utilization": 1.0 00:29:06.068 }, 00:29:06.068 { 00:29:06.068 "id": 2, 00:29:06.068 "state": "CLOSED", 00:29:06.068 "utilization": 1.0 00:29:06.068 }, 00:29:06.068 { 00:29:06.068 "id": 3, 00:29:06.068 "state": "OPEN", 00:29:06.068 "utilization": 0.001953125 00:29:06.068 }, 00:29:06.068 { 00:29:06.068 "id": 4, 00:29:06.068 "state": "OPEN", 00:29:06.068 "utilization": 0.0 00:29:06.068 } 00:29:06.068 ], 00:29:06.068 "read-only": true 00:29:06.068 }, 00:29:06.068 { 00:29:06.068 "name": "verbose_mode", 00:29:06.068 "value": true, 00:29:06.069 "unit": "", 00:29:06.069 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:06.069 }, 00:29:06.069 { 00:29:06.069 "name": "prep_upgrade_on_shutdown", 00:29:06.069 "value": false, 00:29:06.069 "unit": "", 00:29:06.069 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:06.069 } 00:29:06.069 ] 00:29:06.069 } 00:29:06.339 22:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:29:06.339 [2024-11-27 22:01:29.383277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
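The used-chunk check performed a few lines below combines bdev_ftl_get_properties with the jq filter shown there; a sketch of the combined invocation, assuming the same rpc.py path used throughout this run:

  # count cache chunks with non-zero utilization; with the JSON above (chunks 1 and 2 CLOSED at 1.0,
  # chunk 3 OPEN at 0.001953125) this yields 3, matching the used=3 result reported below
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
    | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'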
00:29:06.339 [2024-11-27 22:01:29.383438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:06.339 [2024-11-27 22:01:29.383490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:06.339 [2024-11-27 22:01:29.383509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.339 [2024-11-27 22:01:29.383539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.339 [2024-11-27 22:01:29.383556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:06.339 [2024-11-27 22:01:29.383571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:06.339 [2024-11-27 22:01:29.383587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.339 [2024-11-27 22:01:29.383611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.339 [2024-11-27 22:01:29.383627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:06.339 [2024-11-27 22:01:29.383642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:06.339 [2024-11-27 22:01:29.383684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.339 [2024-11-27 22:01:29.383750] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.451 ms, result 0 00:29:06.339 true 00:29:06.339 22:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:29:06.339 22:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:06.339 22:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:06.609 22:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:29:06.609 22:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:29:06.609 22:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:06.871 [2024-11-27 22:01:29.761778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.871 [2024-11-27 22:01:29.761809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:06.871 [2024-11-27 22:01:29.761818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:06.871 [2024-11-27 22:01:29.761824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.871 [2024-11-27 22:01:29.761840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.871 [2024-11-27 22:01:29.761846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:06.871 [2024-11-27 22:01:29.761852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:06.871 [2024-11-27 22:01:29.761857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:06.871 [2024-11-27 22:01:29.761872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:06.871 [2024-11-27 22:01:29.761877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:06.871 [2024-11-27 22:01:29.761883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:06.871 [2024-11-27 22:01:29.761888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:06.871 [2024-11-27 22:01:29.761927] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.138 ms, result 0 00:29:06.871 true 00:29:06.871 22:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:06.871 { 00:29:06.871 "name": "ftl", 00:29:06.871 "properties": [ 00:29:06.871 { 00:29:06.871 "name": "superblock_version", 00:29:06.871 "value": 5, 00:29:06.871 "read-only": true 00:29:06.871 }, 00:29:06.871 { 00:29:06.871 "name": "base_device", 00:29:06.871 "bands": [ 00:29:06.871 { 00:29:06.871 "id": 0, 00:29:06.871 "state": "FREE", 00:29:06.871 "validity": 0.0 00:29:06.871 }, 00:29:06.871 { 00:29:06.871 "id": 1, 00:29:06.871 "state": "FREE", 00:29:06.871 "validity": 0.0 00:29:06.871 }, 00:29:06.871 { 00:29:06.871 "id": 2, 00:29:06.871 "state": "FREE", 00:29:06.871 "validity": 0.0 00:29:06.871 }, 00:29:06.871 { 00:29:06.871 "id": 3, 00:29:06.871 "state": "FREE", 00:29:06.871 "validity": 0.0 00:29:06.871 }, 00:29:06.871 { 00:29:06.871 "id": 4, 00:29:06.871 "state": "FREE", 00:29:06.871 "validity": 0.0 00:29:06.871 }, 00:29:06.871 { 00:29:06.871 "id": 5, 00:29:06.871 "state": "FREE", 00:29:06.871 "validity": 0.0 00:29:06.871 }, 00:29:06.871 { 00:29:06.872 "id": 6, 00:29:06.872 "state": "FREE", 00:29:06.872 "validity": 0.0 00:29:06.872 }, 00:29:06.872 { 00:29:06.872 "id": 7, 00:29:06.872 "state": "FREE", 00:29:06.872 "validity": 0.0 00:29:06.872 }, 00:29:06.872 { 00:29:06.872 "id": 8, 00:29:06.872 "state": "FREE", 00:29:06.872 "validity": 0.0 00:29:06.872 }, 00:29:06.872 { 00:29:06.872 "id": 9, 00:29:06.872 "state": "FREE", 00:29:06.872 "validity": 0.0 00:29:06.872 }, 00:29:06.872 { 00:29:06.872 "id": 10, 00:29:06.872 "state": "FREE", 00:29:06.872 "validity": 0.0 00:29:06.872 }, 00:29:06.872 { 00:29:06.872 "id": 11, 00:29:06.872 "state": "FREE", 00:29:06.872 "validity": 0.0 00:29:06.872 }, 00:29:06.872 { 00:29:06.872 "id": 12, 00:29:06.872 "state": "FREE", 00:29:06.872 "validity": 0.0 00:29:06.872 }, 00:29:06.872 { 00:29:06.872 "id": 13, 00:29:06.872 "state": "FREE", 00:29:06.872 "validity": 0.0 00:29:06.872 }, 00:29:06.872 { 00:29:06.872 "id": 14, 00:29:06.872 "state": "FREE", 00:29:06.872 "validity": 0.0 00:29:06.872 }, 00:29:06.872 { 00:29:06.872 "id": 15, 00:29:06.872 "state": "FREE", 00:29:06.872 "validity": 0.0 00:29:06.872 }, 00:29:06.872 { 00:29:06.872 "id": 16, 00:29:06.872 "state": "FREE", 00:29:06.872 "validity": 0.0 00:29:06.872 }, 00:29:06.872 { 00:29:06.872 "id": 17, 00:29:06.872 "state": "FREE", 00:29:06.872 "validity": 0.0 00:29:06.872 } 00:29:06.872 ], 00:29:06.872 "read-only": true 00:29:06.872 }, 00:29:06.872 { 00:29:06.872 "name": "cache_device", 00:29:06.872 "type": "bdev", 00:29:06.872 "chunks": [ 00:29:06.872 { 00:29:06.872 "id": 0, 00:29:06.872 "state": "INACTIVE", 00:29:06.872 "utilization": 0.0 00:29:06.872 }, 00:29:06.872 { 00:29:06.872 "id": 1, 00:29:06.872 "state": "CLOSED", 00:29:06.872 "utilization": 1.0 00:29:06.872 }, 00:29:06.872 { 00:29:06.872 "id": 2, 00:29:06.872 "state": "CLOSED", 00:29:06.872 "utilization": 1.0 00:29:06.872 }, 00:29:06.872 { 00:29:06.872 "id": 3, 00:29:06.872 "state": "OPEN", 00:29:06.872 "utilization": 0.001953125 00:29:06.872 }, 00:29:06.872 { 00:29:06.872 "id": 4, 00:29:06.872 "state": "OPEN", 00:29:06.872 "utilization": 0.0 00:29:06.872 } 00:29:06.872 ], 00:29:06.872 "read-only": true 00:29:06.872 }, 00:29:06.872 { 00:29:06.872 "name": "verbose_mode", 
00:29:06.872 "value": true, 00:29:06.872 "unit": "", 00:29:06.872 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:06.872 }, 00:29:06.872 { 00:29:06.872 "name": "prep_upgrade_on_shutdown", 00:29:06.872 "value": true, 00:29:06.872 "unit": "", 00:29:06.872 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:06.872 } 00:29:06.872 ] 00:29:06.872 } 00:29:06.872 22:01:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:29:06.872 22:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 93354 ]] 00:29:06.872 22:01:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 93354 00:29:06.872 22:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 93354 ']' 00:29:06.872 22:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 93354 00:29:06.872 22:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:07.134 22:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:07.134 22:01:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93354 00:29:07.134 22:01:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:07.134 22:01:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:07.134 22:01:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93354' 00:29:07.134 killing process with pid 93354 00:29:07.134 22:01:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 93354 00:29:07.134 22:01:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 93354 00:29:07.134 [2024-11-27 22:01:30.130736] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:07.134 [2024-11-27 22:01:30.134701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:07.134 [2024-11-27 22:01:30.134809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:07.134 [2024-11-27 22:01:30.134857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:07.134 [2024-11-27 22:01:30.134877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:07.134 [2024-11-27 22:01:30.134909] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:07.134 [2024-11-27 22:01:30.135459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:07.134 [2024-11-27 22:01:30.135544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:07.134 [2024-11-27 22:01:30.135589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.520 ms 00:29:07.134 [2024-11-27 22:01:30.135607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.283 [2024-11-27 22:01:37.917983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.283 [2024-11-27 22:01:37.918169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:15.283 [2024-11-27 22:01:37.918222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7782.317 ms 00:29:15.283 [2024-11-27 22:01:37.918242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.283 [2024-11-27 22:01:37.919365] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:29:15.283 [2024-11-27 22:01:37.919383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:15.283 [2024-11-27 22:01:37.919390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.106 ms 00:29:15.283 [2024-11-27 22:01:37.919397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.283 [2024-11-27 22:01:37.920269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.283 [2024-11-27 22:01:37.920294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:15.283 [2024-11-27 22:01:37.920302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.851 ms 00:29:15.283 [2024-11-27 22:01:37.920308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.283 [2024-11-27 22:01:37.922994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.283 [2024-11-27 22:01:37.923097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:15.283 [2024-11-27 22:01:37.923110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.646 ms 00:29:15.283 [2024-11-27 22:01:37.923116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.283 [2024-11-27 22:01:37.925580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.283 [2024-11-27 22:01:37.925610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:15.283 [2024-11-27 22:01:37.925618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.441 ms 00:29:15.283 [2024-11-27 22:01:37.925629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.283 [2024-11-27 22:01:37.925674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.283 [2024-11-27 22:01:37.925681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:15.283 [2024-11-27 22:01:37.925688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:29:15.283 [2024-11-27 22:01:37.925694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.283 [2024-11-27 22:01:37.927758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.283 [2024-11-27 22:01:37.927786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:15.283 [2024-11-27 22:01:37.927793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.051 ms 00:29:15.283 [2024-11-27 22:01:37.927799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.283 [2024-11-27 22:01:37.929388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.283 [2024-11-27 22:01:37.929413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:15.283 [2024-11-27 22:01:37.929420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.566 ms 00:29:15.283 [2024-11-27 22:01:37.929426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.283 [2024-11-27 22:01:37.931371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.283 [2024-11-27 22:01:37.931394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:15.283 [2024-11-27 22:01:37.931400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.922 ms 00:29:15.283 [2024-11-27 22:01:37.931406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.283 [2024-11-27 22:01:37.933289] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.283 [2024-11-27 22:01:37.933398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:15.283 [2024-11-27 22:01:37.933410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.835 ms 00:29:15.283 [2024-11-27 22:01:37.933417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.283 [2024-11-27 22:01:37.933439] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:15.283 [2024-11-27 22:01:37.933451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:15.283 [2024-11-27 22:01:37.933459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:15.283 [2024-11-27 22:01:37.933466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:15.283 [2024-11-27 22:01:37.933473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:15.283 [2024-11-27 22:01:37.933480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:15.283 [2024-11-27 22:01:37.933486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:15.283 [2024-11-27 22:01:37.933492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:15.283 [2024-11-27 22:01:37.933498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:15.283 [2024-11-27 22:01:37.933505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:15.283 [2024-11-27 22:01:37.933511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:15.283 [2024-11-27 22:01:37.933518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:15.283 [2024-11-27 22:01:37.933524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:15.283 [2024-11-27 22:01:37.933531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:15.283 [2024-11-27 22:01:37.933537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:15.283 [2024-11-27 22:01:37.933543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:15.283 [2024-11-27 22:01:37.933549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:15.283 [2024-11-27 22:01:37.933555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:15.283 [2024-11-27 22:01:37.933561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:15.283 [2024-11-27 22:01:37.933570] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:15.283 [2024-11-27 22:01:37.933577] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 9ceb4166-f435-44b7-9a91-79e405710158 00:29:15.283 [2024-11-27 22:01:37.933584] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:15.283 [2024-11-27 22:01:37.933594] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:29:15.283 [2024-11-27 22:01:37.933600] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:29:15.283 [2024-11-27 22:01:37.933607] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:29:15.283 [2024-11-27 22:01:37.933618] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:15.283 [2024-11-27 22:01:37.933624] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:15.283 [2024-11-27 22:01:37.933630] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:15.283 [2024-11-27 22:01:37.933636] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:15.283 [2024-11-27 22:01:37.933642] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:15.283 [2024-11-27 22:01:37.933649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.283 [2024-11-27 22:01:37.933656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:15.283 [2024-11-27 22:01:37.933664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.211 ms 00:29:15.283 [2024-11-27 22:01:37.933671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.283 [2024-11-27 22:01:37.935427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.283 [2024-11-27 22:01:37.935450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:15.284 [2024-11-27 22:01:37.935458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.733 ms 00:29:15.284 [2024-11-27 22:01:37.935465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.284 [2024-11-27 22:01:37.935551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.284 [2024-11-27 22:01:37.935558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:15.284 [2024-11-27 22:01:37.935565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.073 ms 00:29:15.284 [2024-11-27 22:01:37.935570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.284 [2024-11-27 22:01:37.941510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:15.284 [2024-11-27 22:01:37.941544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:15.284 [2024-11-27 22:01:37.941556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:15.284 [2024-11-27 22:01:37.941563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.284 [2024-11-27 22:01:37.941588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:15.284 [2024-11-27 22:01:37.941595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:15.284 [2024-11-27 22:01:37.941602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:15.284 [2024-11-27 22:01:37.941608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.284 [2024-11-27 22:01:37.941650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:15.284 [2024-11-27 22:01:37.941662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:15.284 [2024-11-27 22:01:37.941668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:15.284 [2024-11-27 22:01:37.941675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.284 [2024-11-27 22:01:37.941690] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:15.284 [2024-11-27 22:01:37.941696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:15.284 [2024-11-27 22:01:37.941705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:15.284 [2024-11-27 22:01:37.941711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.284 [2024-11-27 22:01:37.952756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:15.284 [2024-11-27 22:01:37.952931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:15.284 [2024-11-27 22:01:37.952944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:15.284 [2024-11-27 22:01:37.952952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.284 [2024-11-27 22:01:37.961515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:15.284 [2024-11-27 22:01:37.961547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:15.284 [2024-11-27 22:01:37.961556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:15.284 [2024-11-27 22:01:37.961562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.284 [2024-11-27 22:01:37.961627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:15.284 [2024-11-27 22:01:37.961639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:15.284 [2024-11-27 22:01:37.961646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:15.284 [2024-11-27 22:01:37.961653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.284 [2024-11-27 22:01:37.961680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:15.284 [2024-11-27 22:01:37.961688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:15.284 [2024-11-27 22:01:37.961699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:15.284 [2024-11-27 22:01:37.961706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.284 [2024-11-27 22:01:37.961762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:15.284 [2024-11-27 22:01:37.961769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:15.284 [2024-11-27 22:01:37.961778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:15.284 [2024-11-27 22:01:37.961784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.284 [2024-11-27 22:01:37.961811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:15.284 [2024-11-27 22:01:37.961820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:15.284 [2024-11-27 22:01:37.961826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:15.284 [2024-11-27 22:01:37.961832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.284 [2024-11-27 22:01:37.961869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:15.284 [2024-11-27 22:01:37.961877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:15.284 [2024-11-27 22:01:37.961886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:15.284 [2024-11-27 22:01:37.961893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.284 
[2024-11-27 22:01:37.961943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:15.284 [2024-11-27 22:01:37.961952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:15.284 [2024-11-27 22:01:37.961959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:15.284 [2024-11-27 22:01:37.961965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.284 [2024-11-27 22:01:37.962076] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7827.321 ms, result 0 00:29:18.597 22:01:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:18.598 22:01:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:29:18.598 22:01:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:18.598 22:01:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:18.598 22:01:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:18.598 22:01:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=93846 00:29:18.598 22:01:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:18.598 22:01:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:18.598 22:01:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 93846 00:29:18.598 22:01:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 93846 ']' 00:29:18.598 22:01:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:18.598 22:01:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:18.598 22:01:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:18.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:18.598 22:01:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:18.598 22:01:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:18.598 [2024-11-27 22:01:41.372504] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
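Note: the clean shutdown above finishes with result 0 after roughly 7.8 s (almost all of it in the "Stop core poller" step), and the harness then relaunches spdk_tgt from the persisted tgt.json and blocks until its RPC socket answers. A minimal sketch of that relaunch, assuming a plain polling loop in place of the harness's own waitforlisten helper:
  # Launch the target pinned to core 0 with the persisted FTL configuration (paths as in the trace above)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  # Hypothetical stand-in for waitforlisten: poll the default RPC socket until it responds
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done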
00:29:18.598 [2024-11-27 22:01:41.372652] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93846 ] 00:29:18.598 [2024-11-27 22:01:41.519917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.598 [2024-11-27 22:01:41.550458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:18.858 [2024-11-27 22:01:41.885091] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:18.858 [2024-11-27 22:01:41.885186] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:19.120 [2024-11-27 22:01:42.038015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.120 [2024-11-27 22:01:42.038077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:19.120 [2024-11-27 22:01:42.038097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:19.120 [2024-11-27 22:01:42.038109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.120 [2024-11-27 22:01:42.038172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.120 [2024-11-27 22:01:42.038185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:19.120 [2024-11-27 22:01:42.038194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:29:19.120 [2024-11-27 22:01:42.038206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.120 [2024-11-27 22:01:42.038234] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:19.120 [2024-11-27 22:01:42.038535] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:19.120 [2024-11-27 22:01:42.038555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.120 [2024-11-27 22:01:42.038563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:19.120 [2024-11-27 22:01:42.038572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.326 ms 00:29:19.120 [2024-11-27 22:01:42.038580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.120 [2024-11-27 22:01:42.040307] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:19.120 [2024-11-27 22:01:42.044448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.120 [2024-11-27 22:01:42.044662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:19.120 [2024-11-27 22:01:42.044683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.143 ms 00:29:19.121 [2024-11-27 22:01:42.044692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.121 [2024-11-27 22:01:42.044884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.121 [2024-11-27 22:01:42.044915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:19.121 [2024-11-27 22:01:42.044926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:29:19.121 [2024-11-27 22:01:42.044941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.121 [2024-11-27 22:01:42.053369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.121 [2024-11-27 
22:01:42.053408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:19.121 [2024-11-27 22:01:42.053419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.372 ms 00:29:19.121 [2024-11-27 22:01:42.053427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.121 [2024-11-27 22:01:42.053480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.121 [2024-11-27 22:01:42.053504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:19.121 [2024-11-27 22:01:42.053513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:29:19.121 [2024-11-27 22:01:42.053521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.121 [2024-11-27 22:01:42.053597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.121 [2024-11-27 22:01:42.053608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:19.121 [2024-11-27 22:01:42.053617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:29:19.121 [2024-11-27 22:01:42.053628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.121 [2024-11-27 22:01:42.053659] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:19.121 [2024-11-27 22:01:42.055770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.121 [2024-11-27 22:01:42.055962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:19.121 [2024-11-27 22:01:42.055980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.116 ms 00:29:19.121 [2024-11-27 22:01:42.055999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.121 [2024-11-27 22:01:42.056038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.121 [2024-11-27 22:01:42.056047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:19.121 [2024-11-27 22:01:42.056055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:19.121 [2024-11-27 22:01:42.056066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.121 [2024-11-27 22:01:42.056092] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:19.121 [2024-11-27 22:01:42.056114] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:19.121 [2024-11-27 22:01:42.056152] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:19.121 [2024-11-27 22:01:42.056172] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:19.121 [2024-11-27 22:01:42.056282] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:19.121 [2024-11-27 22:01:42.056294] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:19.121 [2024-11-27 22:01:42.056305] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:19.121 [2024-11-27 22:01:42.056315] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:19.121 [2024-11-27 22:01:42.056325] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:29:19.121 [2024-11-27 22:01:42.056334] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:19.121 [2024-11-27 22:01:42.056365] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:19.121 [2024-11-27 22:01:42.056374] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:19.121 [2024-11-27 22:01:42.056382] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:19.121 [2024-11-27 22:01:42.056394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.121 [2024-11-27 22:01:42.056402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:19.121 [2024-11-27 22:01:42.056411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.305 ms 00:29:19.121 [2024-11-27 22:01:42.056424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.121 [2024-11-27 22:01:42.056510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.121 [2024-11-27 22:01:42.056521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:19.121 [2024-11-27 22:01:42.056530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:29:19.121 [2024-11-27 22:01:42.056542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.121 [2024-11-27 22:01:42.056652] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:19.121 [2024-11-27 22:01:42.056666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:19.121 [2024-11-27 22:01:42.056676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:19.121 [2024-11-27 22:01:42.056686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:19.121 [2024-11-27 22:01:42.056695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:19.121 [2024-11-27 22:01:42.056702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:19.121 [2024-11-27 22:01:42.056710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:19.121 [2024-11-27 22:01:42.056719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:19.121 [2024-11-27 22:01:42.056727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:19.121 [2024-11-27 22:01:42.056734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:19.121 [2024-11-27 22:01:42.056742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:19.121 [2024-11-27 22:01:42.056750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:19.121 [2024-11-27 22:01:42.056758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:19.121 [2024-11-27 22:01:42.056765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:19.121 [2024-11-27 22:01:42.056773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:19.121 [2024-11-27 22:01:42.056795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:19.121 [2024-11-27 22:01:42.056805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:19.121 [2024-11-27 22:01:42.056826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:19.121 [2024-11-27 22:01:42.056834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:19.121 [2024-11-27 22:01:42.056842] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:19.121 [2024-11-27 22:01:42.056850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:19.121 [2024-11-27 22:01:42.056858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:19.121 [2024-11-27 22:01:42.056866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:19.121 [2024-11-27 22:01:42.056874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:19.121 [2024-11-27 22:01:42.056883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:19.121 [2024-11-27 22:01:42.056891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:19.121 [2024-11-27 22:01:42.056899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:19.121 [2024-11-27 22:01:42.056906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:19.121 [2024-11-27 22:01:42.056914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:19.121 [2024-11-27 22:01:42.056922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:19.121 [2024-11-27 22:01:42.056931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:19.121 [2024-11-27 22:01:42.056942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:19.121 [2024-11-27 22:01:42.056949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:19.121 [2024-11-27 22:01:42.056955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:19.121 [2024-11-27 22:01:42.056963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:19.121 [2024-11-27 22:01:42.056969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:19.121 [2024-11-27 22:01:42.056975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:19.121 [2024-11-27 22:01:42.056982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:19.121 [2024-11-27 22:01:42.056990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:19.121 [2024-11-27 22:01:42.056997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:19.121 [2024-11-27 22:01:42.057005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:19.121 [2024-11-27 22:01:42.057011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:19.121 [2024-11-27 22:01:42.057018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:19.121 [2024-11-27 22:01:42.057025] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:19.121 [2024-11-27 22:01:42.057033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:19.121 [2024-11-27 22:01:42.057041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:19.121 [2024-11-27 22:01:42.057049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:19.121 [2024-11-27 22:01:42.057065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:19.121 [2024-11-27 22:01:42.057074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:19.121 [2024-11-27 22:01:42.057081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:19.121 [2024-11-27 22:01:42.057088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:19.121 [2024-11-27 22:01:42.057094] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:19.121 [2024-11-27 22:01:42.057101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:19.121 [2024-11-27 22:01:42.057110] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:19.121 [2024-11-27 22:01:42.057120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:19.121 [2024-11-27 22:01:42.057129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:19.121 [2024-11-27 22:01:42.057136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:19.122 [2024-11-27 22:01:42.057143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:19.122 [2024-11-27 22:01:42.057151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:19.122 [2024-11-27 22:01:42.057160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:19.122 [2024-11-27 22:01:42.057167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:19.122 [2024-11-27 22:01:42.057174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:19.122 [2024-11-27 22:01:42.057181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:19.122 [2024-11-27 22:01:42.057191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:19.122 [2024-11-27 22:01:42.057199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:19.122 [2024-11-27 22:01:42.057207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:19.122 [2024-11-27 22:01:42.057214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:19.122 [2024-11-27 22:01:42.057221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:19.122 [2024-11-27 22:01:42.057230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:19.122 [2024-11-27 22:01:42.057238] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:19.122 [2024-11-27 22:01:42.057246] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:19.122 [2024-11-27 22:01:42.057258] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:19.122 [2024-11-27 22:01:42.057266] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:19.122 [2024-11-27 22:01:42.057273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:19.122 [2024-11-27 22:01:42.057287] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:19.122 [2024-11-27 22:01:42.057295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.122 [2024-11-27 22:01:42.057303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:19.122 [2024-11-27 22:01:42.057311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.712 ms 00:29:19.122 [2024-11-27 22:01:42.057318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.122 [2024-11-27 22:01:42.057382] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:29:19.122 [2024-11-27 22:01:42.057397] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:23.328 [2024-11-27 22:01:46.186178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.328 [2024-11-27 22:01:46.186265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:23.329 [2024-11-27 22:01:46.186292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4128.781 ms 00:29:23.329 [2024-11-27 22:01:46.186302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.329 [2024-11-27 22:01:46.199891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.329 [2024-11-27 22:01:46.199946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:23.329 [2024-11-27 22:01:46.199961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.443 ms 00:29:23.329 [2024-11-27 22:01:46.199970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.329 [2024-11-27 22:01:46.200061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.329 [2024-11-27 22:01:46.200073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:23.329 [2024-11-27 22:01:46.200090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:23.329 [2024-11-27 22:01:46.200106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.329 [2024-11-27 22:01:46.213386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.329 [2024-11-27 22:01:46.213604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:23.329 [2024-11-27 22:01:46.213628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.240 ms 00:29:23.329 [2024-11-27 22:01:46.213638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.329 [2024-11-27 22:01:46.213677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.329 [2024-11-27 22:01:46.213693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:23.329 [2024-11-27 22:01:46.213703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:23.329 [2024-11-27 22:01:46.213711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.329 [2024-11-27 22:01:46.214279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.329 [2024-11-27 22:01:46.214313] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:23.329 [2024-11-27 22:01:46.214325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.506 ms 00:29:23.329 [2024-11-27 22:01:46.214358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.329 [2024-11-27 22:01:46.214420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.329 [2024-11-27 22:01:46.214438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:23.329 [2024-11-27 22:01:46.214460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:29:23.329 [2024-11-27 22:01:46.214470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.329 [2024-11-27 22:01:46.222974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.329 [2024-11-27 22:01:46.223016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:23.329 [2024-11-27 22:01:46.223043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.476 ms 00:29:23.329 [2024-11-27 22:01:46.223052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.329 [2024-11-27 22:01:46.236437] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:23.329 [2024-11-27 22:01:46.236497] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:23.329 [2024-11-27 22:01:46.236513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.329 [2024-11-27 22:01:46.236523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:29:23.329 [2024-11-27 22:01:46.236533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.356 ms 00:29:23.329 [2024-11-27 22:01:46.236541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.329 [2024-11-27 22:01:46.241485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.329 [2024-11-27 22:01:46.241533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:29:23.329 [2024-11-27 22:01:46.241547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.888 ms 00:29:23.329 [2024-11-27 22:01:46.241557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.329 [2024-11-27 22:01:46.244072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.329 [2024-11-27 22:01:46.244280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:29:23.329 [2024-11-27 22:01:46.244302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.457 ms 00:29:23.329 [2024-11-27 22:01:46.244311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.329 [2024-11-27 22:01:46.246919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.329 [2024-11-27 22:01:46.246958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:29:23.329 [2024-11-27 22:01:46.246970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.539 ms 00:29:23.329 [2024-11-27 22:01:46.246978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.329 [2024-11-27 22:01:46.247544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.329 [2024-11-27 22:01:46.247607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:23.329 [2024-11-27 
22:01:46.247634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.470 ms 00:29:23.329 [2024-11-27 22:01:46.247662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.329 [2024-11-27 22:01:46.269750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.329 [2024-11-27 22:01:46.269955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:23.329 [2024-11-27 22:01:46.270152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.043 ms 00:29:23.329 [2024-11-27 22:01:46.270194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.329 [2024-11-27 22:01:46.278595] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:23.329 [2024-11-27 22:01:46.279723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.329 [2024-11-27 22:01:46.279871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:23.329 [2024-11-27 22:01:46.279953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.463 ms 00:29:23.329 [2024-11-27 22:01:46.279984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.329 [2024-11-27 22:01:46.280089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.329 [2024-11-27 22:01:46.280118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:29:23.329 [2024-11-27 22:01:46.280204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:23.329 [2024-11-27 22:01:46.280238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.329 [2024-11-27 22:01:46.280315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.329 [2024-11-27 22:01:46.280370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:23.329 [2024-11-27 22:01:46.280438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:29:23.329 [2024-11-27 22:01:46.280475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.329 [2024-11-27 22:01:46.280542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.329 [2024-11-27 22:01:46.280566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:23.329 [2024-11-27 22:01:46.280587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:23.329 [2024-11-27 22:01:46.280606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.329 [2024-11-27 22:01:46.280665] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:23.329 [2024-11-27 22:01:46.280690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.329 [2024-11-27 22:01:46.280711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:23.329 [2024-11-27 22:01:46.280738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:29:23.329 [2024-11-27 22:01:46.280757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.329 [2024-11-27 22:01:46.285532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.329 [2024-11-27 22:01:46.285688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:23.329 [2024-11-27 22:01:46.285768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.739 ms 00:29:23.329 [2024-11-27 22:01:46.285798] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:29:23.329 [2024-11-27 22:01:46.285898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.329 [2024-11-27 22:01:46.285974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:23.329 [2024-11-27 22:01:46.286017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:29:23.329 [2024-11-27 22:01:46.286038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.329 [2024-11-27 22:01:46.287666] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4249.172 ms, result 0 00:29:23.329 [2024-11-27 22:01:46.300701] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:23.329 [2024-11-27 22:01:46.316741] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:23.329 [2024-11-27 22:01:46.324859] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:23.329 22:01:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:23.329 22:01:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:23.329 22:01:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:23.329 22:01:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:23.329 22:01:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:23.591 [2024-11-27 22:01:46.572891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.591 [2024-11-27 22:01:46.573115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:23.591 [2024-11-27 22:01:46.573142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:23.591 [2024-11-27 22:01:46.573152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.591 [2024-11-27 22:01:46.573190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.591 [2024-11-27 22:01:46.573212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:23.591 [2024-11-27 22:01:46.573221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:23.591 [2024-11-27 22:01:46.573229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.591 [2024-11-27 22:01:46.573251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.591 [2024-11-27 22:01:46.573260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:23.591 [2024-11-27 22:01:46.573273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:23.591 [2024-11-27 22:01:46.573281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.591 [2024-11-27 22:01:46.573371] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.442 ms, result 0 00:29:23.591 true 00:29:23.591 22:01:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:23.852 { 00:29:23.852 "name": "ftl", 00:29:23.852 "properties": [ 00:29:23.852 { 00:29:23.852 "name": "superblock_version", 00:29:23.852 "value": 5, 00:29:23.852 "read-only": true 00:29:23.852 }, 00:29:23.852 { 
00:29:23.852 "name": "base_device", 00:29:23.852 "bands": [ 00:29:23.852 { 00:29:23.852 "id": 0, 00:29:23.852 "state": "CLOSED", 00:29:23.852 "validity": 1.0 00:29:23.852 }, 00:29:23.852 { 00:29:23.852 "id": 1, 00:29:23.852 "state": "CLOSED", 00:29:23.852 "validity": 1.0 00:29:23.852 }, 00:29:23.852 { 00:29:23.852 "id": 2, 00:29:23.852 "state": "CLOSED", 00:29:23.852 "validity": 0.007843137254901933 00:29:23.852 }, 00:29:23.852 { 00:29:23.852 "id": 3, 00:29:23.852 "state": "FREE", 00:29:23.852 "validity": 0.0 00:29:23.852 }, 00:29:23.852 { 00:29:23.852 "id": 4, 00:29:23.852 "state": "FREE", 00:29:23.852 "validity": 0.0 00:29:23.852 }, 00:29:23.852 { 00:29:23.852 "id": 5, 00:29:23.852 "state": "FREE", 00:29:23.852 "validity": 0.0 00:29:23.852 }, 00:29:23.852 { 00:29:23.852 "id": 6, 00:29:23.852 "state": "FREE", 00:29:23.852 "validity": 0.0 00:29:23.852 }, 00:29:23.852 { 00:29:23.852 "id": 7, 00:29:23.852 "state": "FREE", 00:29:23.852 "validity": 0.0 00:29:23.852 }, 00:29:23.852 { 00:29:23.852 "id": 8, 00:29:23.852 "state": "FREE", 00:29:23.852 "validity": 0.0 00:29:23.852 }, 00:29:23.852 { 00:29:23.852 "id": 9, 00:29:23.852 "state": "FREE", 00:29:23.852 "validity": 0.0 00:29:23.852 }, 00:29:23.852 { 00:29:23.852 "id": 10, 00:29:23.852 "state": "FREE", 00:29:23.852 "validity": 0.0 00:29:23.852 }, 00:29:23.852 { 00:29:23.852 "id": 11, 00:29:23.852 "state": "FREE", 00:29:23.852 "validity": 0.0 00:29:23.852 }, 00:29:23.852 { 00:29:23.852 "id": 12, 00:29:23.852 "state": "FREE", 00:29:23.852 "validity": 0.0 00:29:23.852 }, 00:29:23.852 { 00:29:23.852 "id": 13, 00:29:23.852 "state": "FREE", 00:29:23.852 "validity": 0.0 00:29:23.852 }, 00:29:23.852 { 00:29:23.852 "id": 14, 00:29:23.852 "state": "FREE", 00:29:23.852 "validity": 0.0 00:29:23.852 }, 00:29:23.852 { 00:29:23.852 "id": 15, 00:29:23.852 "state": "FREE", 00:29:23.852 "validity": 0.0 00:29:23.852 }, 00:29:23.852 { 00:29:23.852 "id": 16, 00:29:23.852 "state": "FREE", 00:29:23.852 "validity": 0.0 00:29:23.852 }, 00:29:23.852 { 00:29:23.852 "id": 17, 00:29:23.852 "state": "FREE", 00:29:23.852 "validity": 0.0 00:29:23.852 } 00:29:23.852 ], 00:29:23.852 "read-only": true 00:29:23.852 }, 00:29:23.852 { 00:29:23.852 "name": "cache_device", 00:29:23.852 "type": "bdev", 00:29:23.852 "chunks": [ 00:29:23.852 { 00:29:23.852 "id": 0, 00:29:23.852 "state": "INACTIVE", 00:29:23.852 "utilization": 0.0 00:29:23.852 }, 00:29:23.852 { 00:29:23.852 "id": 1, 00:29:23.852 "state": "OPEN", 00:29:23.852 "utilization": 0.0 00:29:23.852 }, 00:29:23.852 { 00:29:23.852 "id": 2, 00:29:23.852 "state": "OPEN", 00:29:23.852 "utilization": 0.0 00:29:23.852 }, 00:29:23.852 { 00:29:23.852 "id": 3, 00:29:23.852 "state": "FREE", 00:29:23.852 "utilization": 0.0 00:29:23.852 }, 00:29:23.852 { 00:29:23.852 "id": 4, 00:29:23.852 "state": "FREE", 00:29:23.852 "utilization": 0.0 00:29:23.852 } 00:29:23.852 ], 00:29:23.852 "read-only": true 00:29:23.852 }, 00:29:23.852 { 00:29:23.852 "name": "verbose_mode", 00:29:23.852 "value": true, 00:29:23.852 "unit": "", 00:29:23.852 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:23.852 }, 00:29:23.852 { 00:29:23.852 "name": "prep_upgrade_on_shutdown", 00:29:23.852 "value": false, 00:29:23.852 "unit": "", 00:29:23.852 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:23.852 } 00:29:23.852 ] 00:29:23.852 } 00:29:23.852 22:01:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:29:23.852 22:01:46 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:23.852 22:01:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:24.113 22:01:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:29:24.113 22:01:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:29:24.113 22:01:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:29:24.113 22:01:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:29:24.113 22:01:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:24.374 Validate MD5 checksum, iteration 1 00:29:24.374 22:01:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:29:24.374 22:01:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:29:24.374 22:01:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:29:24.374 22:01:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:24.374 22:01:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:24.375 22:01:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:24.375 22:01:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:24.375 22:01:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:24.375 22:01:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:24.375 22:01:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:24.375 22:01:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:24.375 22:01:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:24.375 22:01:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:24.375 [2024-11-27 22:01:47.311046] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
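Note: the used=0 and opened=0 counts above both come from piping bdev_ftl_get_properties output through jq. A minimal standalone version of those two queries, assuming the same rpc.py path and the exact filters shown in the trace:
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Cache chunks still holding data (non-zero utilization)
  used=$($rpc bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
  # Band entries reported in the OPENED state
  opened=$($rpc bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length')
  echo "used=$used opened=$opened"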
00:29:24.375 [2024-11-27 22:01:47.311371] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93925 ] 00:29:24.375 [2024-11-27 22:01:47.449693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.375 [2024-11-27 22:01:47.478291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.767  [2024-11-27T22:01:50.277Z] Copying: 502/1024 [MB] (502 MBps) [2024-11-27T22:01:50.277Z] Copying: 976/1024 [MB] (474 MBps) [2024-11-27T22:01:50.846Z] Copying: 1024/1024 [MB] (average 493 MBps) 00:29:27.725 00:29:27.725 22:01:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:27.725 22:01:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:29.648 22:01:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:29.648 Validate MD5 checksum, iteration 2 00:29:29.648 22:01:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=5bb1228973e4682d5ab243faca2641ff 00:29:29.648 22:01:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 5bb1228973e4682d5ab243faca2641ff != \5\b\b\1\2\2\8\9\7\3\e\4\6\8\2\d\5\a\b\2\4\3\f\a\c\a\2\6\4\1\f\f ]] 00:29:29.648 22:01:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:29.648 22:01:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:29.648 22:01:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:29.648 22:01:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:29.648 22:01:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:29.648 22:01:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:29.648 22:01:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:29.648 22:01:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:29.648 22:01:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:29.648 [2024-11-27 22:01:52.683972] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
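Note: each checksum iteration reads a 1 GiB window (1024 x 1 MiB blocks) from the initiator-side ftln1 bdev with spdk_dd, hashes the scratch file, and advances skip by 1024; iteration 1 above hashed to 5bb1228973e4682d5ab243faca2641ff and the comparison passed. A minimal sketch of one iteration, assuming the flags and paths shown in the trace and a hypothetical expected_sum holding the value the comparison checks against:
  file=/home/vagrant/spdk_repo/spdk/test/ftl/file
  # Read the next 1024 x 1 MiB blocks from ftln1 over NVMe/TCP (initiator config from ini.json)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
      --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
  sum=$(md5sum "$file" | cut -f1 -d' ')
  # Hypothetical check: fail loudly if the data read back from ftln1 drifts from the recorded value
  [[ "$sum" == "$expected_sum" ]] || { echo "MD5 mismatch at skip=$skip"; exit 1; }
  skip=$((skip + 1024))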
00:29:29.648 [2024-11-27 22:01:52.684065] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93982 ] 00:29:29.907 [2024-11-27 22:01:52.822794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.908 [2024-11-27 22:01:52.839043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.283  [2024-11-27T22:01:55.054Z] Copying: 671/1024 [MB] (671 MBps) [2024-11-27T22:01:59.251Z] Copying: 1024/1024 [MB] (average 659 MBps) 00:29:36.130 00:29:36.130 22:01:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:36.131 22:01:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:38.038 22:02:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:38.038 22:02:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=545bc764b1a63652ea7d86c98c529238 00:29:38.038 22:02:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 545bc764b1a63652ea7d86c98c529238 != \5\4\5\b\c\7\6\4\b\1\a\6\3\6\5\2\e\a\7\d\8\6\c\9\8\c\5\2\9\2\3\8 ]] 00:29:38.038 22:02:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:38.038 22:02:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:38.038 22:02:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:29:38.038 22:02:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 93846 ]] 00:29:38.038 22:02:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 93846 00:29:38.038 22:02:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:29:38.038 22:02:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:29:38.038 22:02:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:38.038 22:02:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:38.038 22:02:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:38.038 22:02:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:38.038 22:02:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94071 00:29:38.038 22:02:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:38.038 22:02:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94071 00:29:38.038 22:02:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 94071 ']' 00:29:38.038 22:02:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:38.038 22:02:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:38.038 22:02:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:38.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
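Note: both checksum windows match (5bb1228973e4682d5ab243faca2641ff and 545bc764b1a63652ea7d86c98c529238), so the test switches to the dirty case: tcp_target_shutdown_dirty sends SIGKILL to the running target (pid 93846) instead of letting FTL persist a clean state, then a new spdk_tgt (pid 94071) is started from the same tgt.json so the next FTL startup has to recover from the dirty shutdown. A minimal sketch of that step, assuming the pid is tracked in spdk_tgt_pid as the trace suggests:
  # Simulate a crash: no clean FTL shutdown, just SIGKILL the target process
  kill -9 "$spdk_tgt_pid"
  unset spdk_tgt_pid
  # Bring the target back with the same persisted configuration; FTL must now start from a dirty state
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!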
00:29:38.038 22:02:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:38.038 22:02:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:38.038 [2024-11-27 22:02:01.158705] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:29:38.038 [2024-11-27 22:02:01.158931] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94071 ] 00:29:38.299 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 93846 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:29:38.299 [2024-11-27 22:02:01.301925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.299 [2024-11-27 22:02:01.321417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.559 [2024-11-27 22:02:01.601501] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:38.559 [2024-11-27 22:02:01.601577] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:38.822 [2024-11-27 22:02:01.754879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.822 [2024-11-27 22:02:01.754938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:38.822 [2024-11-27 22:02:01.754956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:38.822 [2024-11-27 22:02:01.754965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.822 [2024-11-27 22:02:01.755024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.822 [2024-11-27 22:02:01.755038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:38.822 [2024-11-27 22:02:01.755046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:29:38.822 [2024-11-27 22:02:01.755054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.822 [2024-11-27 22:02:01.755077] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:38.822 [2024-11-27 22:02:01.755372] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:38.822 [2024-11-27 22:02:01.755391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.822 [2024-11-27 22:02:01.755399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:38.822 [2024-11-27 22:02:01.755408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.319 ms 00:29:38.822 [2024-11-27 22:02:01.755416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.822 [2024-11-27 22:02:01.755743] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:38.822 [2024-11-27 22:02:01.761725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.822 [2024-11-27 22:02:01.761786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:38.822 [2024-11-27 22:02:01.761807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.981 ms 00:29:38.822 [2024-11-27 22:02:01.761815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.822 [2024-11-27 22:02:01.763368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:29:38.822 [2024-11-27 22:02:01.763408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:38.822 [2024-11-27 22:02:01.763419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:29:38.822 [2024-11-27 22:02:01.763431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.822 [2024-11-27 22:02:01.763732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.822 [2024-11-27 22:02:01.763744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:38.822 [2024-11-27 22:02:01.763753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.236 ms 00:29:38.822 [2024-11-27 22:02:01.763761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.822 [2024-11-27 22:02:01.763798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.822 [2024-11-27 22:02:01.763808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:38.822 [2024-11-27 22:02:01.763816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:29:38.822 [2024-11-27 22:02:01.763824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.822 [2024-11-27 22:02:01.763857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.822 [2024-11-27 22:02:01.763874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:38.822 [2024-11-27 22:02:01.763884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:29:38.822 [2024-11-27 22:02:01.763896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.822 [2024-11-27 22:02:01.763937] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:38.822 [2024-11-27 22:02:01.765262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.822 [2024-11-27 22:02:01.765302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:38.822 [2024-11-27 22:02:01.765312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.328 ms 00:29:38.822 [2024-11-27 22:02:01.765319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.822 [2024-11-27 22:02:01.765375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.822 [2024-11-27 22:02:01.765390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:38.822 [2024-11-27 22:02:01.765399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:38.822 [2024-11-27 22:02:01.765407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.822 [2024-11-27 22:02:01.765431] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:38.822 [2024-11-27 22:02:01.765453] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:38.822 [2024-11-27 22:02:01.765490] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:38.823 [2024-11-27 22:02:01.765509] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:38.823 [2024-11-27 22:02:01.765616] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:38.823 [2024-11-27 22:02:01.765627] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:38.823 [2024-11-27 22:02:01.765638] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:38.823 [2024-11-27 22:02:01.765648] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:38.823 [2024-11-27 22:02:01.765657] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:38.823 [2024-11-27 22:02:01.765666] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:38.823 [2024-11-27 22:02:01.765674] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:38.823 [2024-11-27 22:02:01.765681] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:38.823 [2024-11-27 22:02:01.765688] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:38.823 [2024-11-27 22:02:01.765696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.823 [2024-11-27 22:02:01.765706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:38.823 [2024-11-27 22:02:01.765719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.268 ms 00:29:38.823 [2024-11-27 22:02:01.765730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.823 [2024-11-27 22:02:01.765815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.823 [2024-11-27 22:02:01.765827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:38.823 [2024-11-27 22:02:01.765836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:29:38.823 [2024-11-27 22:02:01.765844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.823 [2024-11-27 22:02:01.765944] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:38.823 [2024-11-27 22:02:01.765955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:38.823 [2024-11-27 22:02:01.765965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:38.823 [2024-11-27 22:02:01.765977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:38.823 [2024-11-27 22:02:01.765986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:38.823 [2024-11-27 22:02:01.765995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:38.823 [2024-11-27 22:02:01.766003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:38.823 [2024-11-27 22:02:01.766012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:38.823 [2024-11-27 22:02:01.766020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:38.823 [2024-11-27 22:02:01.766027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:38.823 [2024-11-27 22:02:01.766035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:38.823 [2024-11-27 22:02:01.766043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:38.823 [2024-11-27 22:02:01.766052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:38.823 [2024-11-27 22:02:01.766065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:38.823 [2024-11-27 22:02:01.766075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:29:38.823 [2024-11-27 22:02:01.766083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:38.823 [2024-11-27 22:02:01.766091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:38.823 [2024-11-27 22:02:01.766099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:38.823 [2024-11-27 22:02:01.766107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:38.823 [2024-11-27 22:02:01.766115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:38.823 [2024-11-27 22:02:01.766122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:38.823 [2024-11-27 22:02:01.766130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:38.823 [2024-11-27 22:02:01.766137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:38.823 [2024-11-27 22:02:01.766145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:38.823 [2024-11-27 22:02:01.766153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:38.823 [2024-11-27 22:02:01.766161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:38.823 [2024-11-27 22:02:01.766168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:38.823 [2024-11-27 22:02:01.766176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:38.823 [2024-11-27 22:02:01.766184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:38.823 [2024-11-27 22:02:01.766193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:38.823 [2024-11-27 22:02:01.766202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:38.823 [2024-11-27 22:02:01.766209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:38.823 [2024-11-27 22:02:01.766217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:38.823 [2024-11-27 22:02:01.766224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:38.823 [2024-11-27 22:02:01.766231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:38.823 [2024-11-27 22:02:01.766239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:38.823 [2024-11-27 22:02:01.766247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:38.823 [2024-11-27 22:02:01.766255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:38.823 [2024-11-27 22:02:01.766263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:38.823 [2024-11-27 22:02:01.766270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:38.823 [2024-11-27 22:02:01.766277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:38.823 [2024-11-27 22:02:01.766285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:38.823 [2024-11-27 22:02:01.766293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:38.823 [2024-11-27 22:02:01.766301] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:38.823 [2024-11-27 22:02:01.766310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:38.823 [2024-11-27 22:02:01.766321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:38.823 [2024-11-27 22:02:01.766348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:29:38.823 [2024-11-27 22:02:01.766358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:38.823 [2024-11-27 22:02:01.766367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:38.823 [2024-11-27 22:02:01.766375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:38.823 [2024-11-27 22:02:01.766383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:38.823 [2024-11-27 22:02:01.766391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:38.823 [2024-11-27 22:02:01.766399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:38.823 [2024-11-27 22:02:01.766409] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:38.823 [2024-11-27 22:02:01.766419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:38.823 [2024-11-27 22:02:01.766427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:38.823 [2024-11-27 22:02:01.766435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:38.823 [2024-11-27 22:02:01.766442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:38.823 [2024-11-27 22:02:01.766450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:38.823 [2024-11-27 22:02:01.766458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:38.823 [2024-11-27 22:02:01.766465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:38.824 [2024-11-27 22:02:01.766476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:38.824 [2024-11-27 22:02:01.766483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:38.824 [2024-11-27 22:02:01.766490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:38.824 [2024-11-27 22:02:01.766498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:38.824 [2024-11-27 22:02:01.766505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:38.824 [2024-11-27 22:02:01.766513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:38.824 [2024-11-27 22:02:01.766520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:38.824 [2024-11-27 22:02:01.766527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:38.824 [2024-11-27 22:02:01.766535] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:29:38.824 [2024-11-27 22:02:01.766544] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:38.824 [2024-11-27 22:02:01.766561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:38.824 [2024-11-27 22:02:01.766575] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:38.824 [2024-11-27 22:02:01.766583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:38.824 [2024-11-27 22:02:01.766590] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:38.824 [2024-11-27 22:02:01.766597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.824 [2024-11-27 22:02:01.766607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:38.824 [2024-11-27 22:02:01.766618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.721 ms 00:29:38.824 [2024-11-27 22:02:01.766630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.824 [2024-11-27 22:02:01.777977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.824 [2024-11-27 22:02:01.778022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:38.824 [2024-11-27 22:02:01.778034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.293 ms 00:29:38.824 [2024-11-27 22:02:01.778046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.824 [2024-11-27 22:02:01.778089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.824 [2024-11-27 22:02:01.778098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:38.824 [2024-11-27 22:02:01.778109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:29:38.824 [2024-11-27 22:02:01.778117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.824 [2024-11-27 22:02:01.791484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.824 [2024-11-27 22:02:01.791767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:38.824 [2024-11-27 22:02:01.791845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.303 ms 00:29:38.824 [2024-11-27 22:02:01.791870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.824 [2024-11-27 22:02:01.791942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.824 [2024-11-27 22:02:01.791986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:38.824 [2024-11-27 22:02:01.792012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:38.824 [2024-11-27 22:02:01.792036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.824 [2024-11-27 22:02:01.792165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.824 [2024-11-27 22:02:01.792196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:38.824 [2024-11-27 22:02:01.792219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:29:38.824 [2024-11-27 22:02:01.792325] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:29:38.824 [2024-11-27 22:02:01.792412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.824 [2024-11-27 22:02:01.792443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:38.824 [2024-11-27 22:02:01.792464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:29:38.824 [2024-11-27 22:02:01.792487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.824 [2024-11-27 22:02:01.799856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.824 [2024-11-27 22:02:01.799963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:38.824 [2024-11-27 22:02:01.800011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.337 ms 00:29:38.824 [2024-11-27 22:02:01.800039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.824 [2024-11-27 22:02:01.800134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.824 [2024-11-27 22:02:01.800159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:29:38.824 [2024-11-27 22:02:01.800182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:38.824 [2024-11-27 22:02:01.800201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.824 [2024-11-27 22:02:01.817422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.824 [2024-11-27 22:02:01.817570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:29:38.824 [2024-11-27 22:02:01.817648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.191 ms 00:29:38.824 [2024-11-27 22:02:01.817675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.824 [2024-11-27 22:02:01.819011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.824 [2024-11-27 22:02:01.819123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:38.824 [2024-11-27 22:02:01.819188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.304 ms 00:29:38.824 [2024-11-27 22:02:01.819215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.824 [2024-11-27 22:02:01.834886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.824 [2024-11-27 22:02:01.835037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:38.824 [2024-11-27 22:02:01.835090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.603 ms 00:29:38.824 [2024-11-27 22:02:01.835113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.824 [2024-11-27 22:02:01.835276] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:29:38.824 [2024-11-27 22:02:01.835472] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:29:38.824 [2024-11-27 22:02:01.835553] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:29:38.824 [2024-11-27 22:02:01.835629] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:29:38.824 [2024-11-27 22:02:01.835638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.824 [2024-11-27 22:02:01.835646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:29:38.824 [2024-11-27 
22:02:01.835658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.443 ms 00:29:38.824 [2024-11-27 22:02:01.835665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.824 [2024-11-27 22:02:01.835718] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:29:38.824 [2024-11-27 22:02:01.835729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.824 [2024-11-27 22:02:01.835737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:29:38.824 [2024-11-27 22:02:01.835745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:29:38.824 [2024-11-27 22:02:01.835755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.824 [2024-11-27 22:02:01.839044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.825 [2024-11-27 22:02:01.839078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:29:38.825 [2024-11-27 22:02:01.839088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.269 ms 00:29:38.825 [2024-11-27 22:02:01.839099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.825 [2024-11-27 22:02:01.839726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.825 [2024-11-27 22:02:01.839750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:29:38.825 [2024-11-27 22:02:01.839761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:38.825 [2024-11-27 22:02:01.839769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:38.825 [2024-11-27 22:02:01.839831] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:29:38.825 [2024-11-27 22:02:01.839977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.825 [2024-11-27 22:02:01.839993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:38.825 [2024-11-27 22:02:01.840007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.147 ms 00:29:38.825 [2024-11-27 22:02:01.840015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.396 [2024-11-27 22:02:02.426028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.396 [2024-11-27 22:02:02.426249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:39.396 [2024-11-27 22:02:02.426271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 585.715 ms 00:29:39.396 [2024-11-27 22:02:02.426280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.396 [2024-11-27 22:02:02.427780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.396 [2024-11-27 22:02:02.427819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:39.396 [2024-11-27 22:02:02.427828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.118 ms 00:29:39.396 [2024-11-27 22:02:02.427840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.396 [2024-11-27 22:02:02.428219] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:29:39.396 [2024-11-27 22:02:02.428243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.396 [2024-11-27 22:02:02.428252] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:39.396 [2024-11-27 22:02:02.428262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.377 ms 00:29:39.396 [2024-11-27 22:02:02.428275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.396 [2024-11-27 22:02:02.428305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.396 [2024-11-27 22:02:02.428318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:39.396 [2024-11-27 22:02:02.428326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:39.396 [2024-11-27 22:02:02.428346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.396 [2024-11-27 22:02:02.428379] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 588.546 ms, result 0 00:29:39.396 [2024-11-27 22:02:02.428424] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:29:39.396 [2024-11-27 22:02:02.428482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.396 [2024-11-27 22:02:02.428493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:39.396 [2024-11-27 22:02:02.428501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:29:39.396 [2024-11-27 22:02:02.428508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.968 [2024-11-27 22:02:03.046496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.968 [2024-11-27 22:02:03.046580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:39.968 [2024-11-27 22:02:03.046598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 617.588 ms 00:29:39.968 [2024-11-27 22:02:03.046608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.968 [2024-11-27 22:02:03.048363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.968 [2024-11-27 22:02:03.048411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:39.968 [2024-11-27 22:02:03.048422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.177 ms 00:29:39.968 [2024-11-27 22:02:03.048431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.968 [2024-11-27 22:02:03.049091] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:29:39.968 [2024-11-27 22:02:03.049130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.968 [2024-11-27 22:02:03.049139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:39.968 [2024-11-27 22:02:03.049150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.665 ms 00:29:39.968 [2024-11-27 22:02:03.049158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.968 [2024-11-27 22:02:03.049253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.968 [2024-11-27 22:02:03.049264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:39.968 [2024-11-27 22:02:03.049273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:39.968 [2024-11-27 22:02:03.049282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.968 [2024-11-27 
22:02:03.049322] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 620.894 ms, result 0 00:29:39.968 [2024-11-27 22:02:03.049387] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:39.968 [2024-11-27 22:02:03.049400] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:39.968 [2024-11-27 22:02:03.049410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.968 [2024-11-27 22:02:03.049420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:29:39.968 [2024-11-27 22:02:03.049430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1209.593 ms 00:29:39.968 [2024-11-27 22:02:03.049443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.968 [2024-11-27 22:02:03.049477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.968 [2024-11-27 22:02:03.049486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:29:39.968 [2024-11-27 22:02:03.049495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:39.968 [2024-11-27 22:02:03.049503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.968 [2024-11-27 22:02:03.058653] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:39.968 [2024-11-27 22:02:03.058798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.968 [2024-11-27 22:02:03.058813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:39.968 [2024-11-27 22:02:03.058824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.277 ms 00:29:39.968 [2024-11-27 22:02:03.058833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.968 [2024-11-27 22:02:03.059583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.968 [2024-11-27 22:02:03.059609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:29:39.968 [2024-11-27 22:02:03.059620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.668 ms 00:29:39.968 [2024-11-27 22:02:03.059631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.968 [2024-11-27 22:02:03.061869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.968 [2024-11-27 22:02:03.061912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:29:39.968 [2024-11-27 22:02:03.061934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.220 ms 00:29:39.968 [2024-11-27 22:02:03.061943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.968 [2024-11-27 22:02:03.061992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.968 [2024-11-27 22:02:03.062002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:29:39.968 [2024-11-27 22:02:03.062011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:39.968 [2024-11-27 22:02:03.062019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.968 [2024-11-27 22:02:03.062130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.968 [2024-11-27 22:02:03.062145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:39.968 
[2024-11-27 22:02:03.062157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:29:39.968 [2024-11-27 22:02:03.062165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.968 [2024-11-27 22:02:03.062187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.968 [2024-11-27 22:02:03.062196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:39.968 [2024-11-27 22:02:03.062205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:39.968 [2024-11-27 22:02:03.062219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.968 [2024-11-27 22:02:03.062250] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:39.968 [2024-11-27 22:02:03.062265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.968 [2024-11-27 22:02:03.062274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:39.968 [2024-11-27 22:02:03.062282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:39.968 [2024-11-27 22:02:03.062292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.968 [2024-11-27 22:02:03.062376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.968 [2024-11-27 22:02:03.062387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:39.969 [2024-11-27 22:02:03.062397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:29:39.969 [2024-11-27 22:02:03.062409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.969 [2024-11-27 22:02:03.063548] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1308.161 ms, result 0 00:29:39.969 [2024-11-27 22:02:03.079210] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:40.230 [2024-11-27 22:02:03.095213] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:40.230 [2024-11-27 22:02:03.103365] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:40.799 Validate MD5 checksum, iteration 1 00:29:40.799 22:02:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:40.799 22:02:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:40.799 22:02:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:40.799 22:02:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:40.799 22:02:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:29:40.799 22:02:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:40.799 22:02:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:40.799 22:02:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:40.799 22:02:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:40.799 22:02:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:40.799 22:02:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:40.799 22:02:03 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:40.799 22:02:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:40.799 22:02:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:40.799 22:02:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:40.799 [2024-11-27 22:02:03.790312] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:29:40.799 [2024-11-27 22:02:03.790569] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94106 ] 00:29:41.058 [2024-11-27 22:02:03.934534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.058 [2024-11-27 22:02:03.953197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:42.437  [2024-11-27T22:02:05.817Z] Copying: 814/1024 [MB] (814 MBps) [2024-11-27T22:02:06.388Z] Copying: 1024/1024 [MB] (average 789 MBps) 00:29:43.267 00:29:43.267 22:02:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:43.267 22:02:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:45.799 22:02:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:45.799 22:02:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=5bb1228973e4682d5ab243faca2641ff 00:29:45.799 22:02:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 5bb1228973e4682d5ab243faca2641ff != \5\b\b\1\2\2\8\9\7\3\e\4\6\8\2\d\5\a\b\2\4\3\f\a\c\a\2\6\4\1\f\f ]] 00:29:45.800 Validate MD5 checksum, iteration 2 00:29:45.800 22:02:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:45.800 22:02:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:45.800 22:02:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:45.800 22:02:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:45.800 22:02:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:45.800 22:02:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:45.800 22:02:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:45.800 22:02:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:45.800 22:02:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:45.800 [2024-11-27 22:02:08.496451] Starting SPDK v25.01-pre git sha1 
35cd3e84d / DPDK 22.11.4 initialization... 00:29:45.800 [2024-11-27 22:02:08.497156] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94156 ] 00:29:45.800 [2024-11-27 22:02:08.655364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.800 [2024-11-27 22:02:08.673945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.181  [2024-11-27T22:02:10.868Z] Copying: 653/1024 [MB] (653 MBps) [2024-11-27T22:02:11.130Z] Copying: 1024/1024 [MB] (average 654 MBps) 00:29:48.009 00:29:48.009 22:02:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:48.009 22:02:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:50.552 22:02:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:50.552 22:02:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=545bc764b1a63652ea7d86c98c529238 00:29:50.552 22:02:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 545bc764b1a63652ea7d86c98c529238 != \5\4\5\b\c\7\6\4\b\1\a\6\3\6\5\2\e\a\7\d\8\6\c\9\8\c\5\2\9\2\3\8 ]] 00:29:50.552 22:02:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:50.552 22:02:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:50.552 22:02:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:29:50.552 22:02:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:29:50.552 22:02:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:29:50.552 22:02:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:50.552 22:02:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:29:50.552 22:02:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:29:50.552 22:02:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:29:50.552 22:02:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:29:50.552 22:02:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 94071 ]] 00:29:50.552 22:02:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 94071 00:29:50.552 22:02:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 94071 ']' 00:29:50.552 22:02:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 94071 00:29:50.552 22:02:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:50.552 22:02:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:50.552 22:02:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94071 00:29:50.552 killing process with pid 94071 00:29:50.552 22:02:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:50.553 22:02:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:50.553 22:02:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94071' 00:29:50.553 22:02:13 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 94071 00:29:50.553 22:02:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 94071 00:29:50.553 [2024-11-27 22:02:13.403544] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:50.553 [2024-11-27 22:02:13.407624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.553 [2024-11-27 22:02:13.407729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:50.553 [2024-11-27 22:02:13.407782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:50.553 [2024-11-27 22:02:13.407801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.553 [2024-11-27 22:02:13.407832] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:50.553 [2024-11-27 22:02:13.408518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.553 [2024-11-27 22:02:13.408592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:50.553 [2024-11-27 22:02:13.408638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.450 ms 00:29:50.553 [2024-11-27 22:02:13.408656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.553 [2024-11-27 22:02:13.408857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.553 [2024-11-27 22:02:13.408877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:50.553 [2024-11-27 22:02:13.408892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.174 ms 00:29:50.553 [2024-11-27 22:02:13.408930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.553 [2024-11-27 22:02:13.410005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.553 [2024-11-27 22:02:13.410030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:50.553 [2024-11-27 22:02:13.410037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.059 ms 00:29:50.553 [2024-11-27 22:02:13.410046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.553 [2024-11-27 22:02:13.410910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.553 [2024-11-27 22:02:13.410927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:50.553 [2024-11-27 22:02:13.410934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.842 ms 00:29:50.553 [2024-11-27 22:02:13.410941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.553 [2024-11-27 22:02:13.412268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.553 [2024-11-27 22:02:13.412296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:50.553 [2024-11-27 22:02:13.412307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.296 ms 00:29:50.553 [2024-11-27 22:02:13.412313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.553 [2024-11-27 22:02:13.413373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.553 [2024-11-27 22:02:13.413397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:50.553 [2024-11-27 22:02:13.413405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.018 ms 00:29:50.553 [2024-11-27 22:02:13.413410] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:29:50.553 [2024-11-27 22:02:13.413485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.553 [2024-11-27 22:02:13.413493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:50.553 [2024-11-27 22:02:13.413499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:29:50.553 [2024-11-27 22:02:13.413508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.553 [2024-11-27 22:02:13.414718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.553 [2024-11-27 22:02:13.414750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:50.553 [2024-11-27 22:02:13.414757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.194 ms 00:29:50.553 [2024-11-27 22:02:13.414762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.553 [2024-11-27 22:02:13.415831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.553 [2024-11-27 22:02:13.415857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:50.553 [2024-11-27 22:02:13.415864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.045 ms 00:29:50.553 [2024-11-27 22:02:13.415869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.553 [2024-11-27 22:02:13.416989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.553 [2024-11-27 22:02:13.417015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:50.553 [2024-11-27 22:02:13.417022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.096 ms 00:29:50.553 [2024-11-27 22:02:13.417028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.553 [2024-11-27 22:02:13.418010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.553 [2024-11-27 22:02:13.418096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:50.553 [2024-11-27 22:02:13.418107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.936 ms 00:29:50.553 [2024-11-27 22:02:13.418113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.553 [2024-11-27 22:02:13.418135] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:50.553 [2024-11-27 22:02:13.418146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:50.553 [2024-11-27 22:02:13.418159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:50.553 [2024-11-27 22:02:13.418166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:50.553 [2024-11-27 22:02:13.418172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:50.553 [2024-11-27 22:02:13.418179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:50.553 [2024-11-27 22:02:13.418185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:50.553 [2024-11-27 22:02:13.418190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:50.553 [2024-11-27 22:02:13.418196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:50.553 
[2024-11-27 22:02:13.418203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:50.553 [2024-11-27 22:02:13.418209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:50.553 [2024-11-27 22:02:13.418215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:50.553 [2024-11-27 22:02:13.418221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:50.553 [2024-11-27 22:02:13.418227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:50.553 [2024-11-27 22:02:13.418233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:50.553 [2024-11-27 22:02:13.418239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:50.553 [2024-11-27 22:02:13.418245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:50.553 [2024-11-27 22:02:13.418251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:50.553 [2024-11-27 22:02:13.418257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:50.553 [2024-11-27 22:02:13.418264] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:50.553 [2024-11-27 22:02:13.418272] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 9ceb4166-f435-44b7-9a91-79e405710158 00:29:50.553 [2024-11-27 22:02:13.418278] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:50.553 [2024-11-27 22:02:13.418284] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:29:50.553 [2024-11-27 22:02:13.418289] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:29:50.553 [2024-11-27 22:02:13.418295] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:29:50.553 [2024-11-27 22:02:13.418300] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:50.553 [2024-11-27 22:02:13.418306] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:50.553 [2024-11-27 22:02:13.418314] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:50.553 [2024-11-27 22:02:13.418319] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:50.553 [2024-11-27 22:02:13.418324] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:50.553 [2024-11-27 22:02:13.418329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.553 [2024-11-27 22:02:13.418349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:50.553 [2024-11-27 22:02:13.418356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.195 ms 00:29:50.553 [2024-11-27 22:02:13.418362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.553 [2024-11-27 22:02:13.419566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.553 [2024-11-27 22:02:13.419591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:50.553 [2024-11-27 22:02:13.419599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.190 ms 00:29:50.553 [2024-11-27 22:02:13.419611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
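Editor's note: the killprocess 94071 call a few lines above is the counterpart of the earlier kill -9. This time the target gets a plain SIGTERM, so the reactor runs the full 'FTL shutdown' management process traced around this point: persisting L2P, NV cache, valid map, P2L and band metadata, setting the clean state, and dumping the band validity and statistics shown here before the remaining rollback steps below. A minimal stand-in for that helper, hedged: the real killprocess in autotest_common.sh additionally checks the process name and the sudo case and uses wait, as the traced lines show.

    # Sketch of a clean-shutdown helper, contrasted with the earlier kill -9.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0   # already gone, nothing to do
        echo "killing process with pid $pid"
        kill "$pid"                              # SIGTERM, not -9: lets FTL persist its metadata
        while kill -0 "$pid" 2>/dev/null; do     # wait for the clean shutdown to finish
            sleep 0.5
        done
    }

    # Usage matching this run:
    killprocess 94071
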
00:29:50.553 [2024-11-27 22:02:13.419681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:50.553 [2024-11-27 22:02:13.419688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:50.553 [2024-11-27 22:02:13.419694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:29:50.553 [2024-11-27 22:02:13.419700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.553 [2024-11-27 22:02:13.424110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.553 [2024-11-27 22:02:13.424136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:50.553 [2024-11-27 22:02:13.424143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.553 [2024-11-27 22:02:13.424150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.553 [2024-11-27 22:02:13.424173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.553 [2024-11-27 22:02:13.424179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:50.553 [2024-11-27 22:02:13.424186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.553 [2024-11-27 22:02:13.424191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.553 [2024-11-27 22:02:13.424246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.553 [2024-11-27 22:02:13.424254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:50.553 [2024-11-27 22:02:13.424260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.554 [2024-11-27 22:02:13.424265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.554 [2024-11-27 22:02:13.424280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.554 [2024-11-27 22:02:13.424287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:50.554 [2024-11-27 22:02:13.424292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.554 [2024-11-27 22:02:13.424298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.554 [2024-11-27 22:02:13.432097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.554 [2024-11-27 22:02:13.432236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:50.554 [2024-11-27 22:02:13.432247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.554 [2024-11-27 22:02:13.432253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.554 [2024-11-27 22:02:13.438258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.554 [2024-11-27 22:02:13.438287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:50.554 [2024-11-27 22:02:13.438295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.554 [2024-11-27 22:02:13.438302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.554 [2024-11-27 22:02:13.438350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.554 [2024-11-27 22:02:13.438359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:50.554 [2024-11-27 22:02:13.438365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.554 [2024-11-27 22:02:13.438372] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.554 [2024-11-27 22:02:13.438415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.554 [2024-11-27 22:02:13.438425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:50.554 [2024-11-27 22:02:13.438431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.554 [2024-11-27 22:02:13.438437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.554 [2024-11-27 22:02:13.438486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.554 [2024-11-27 22:02:13.438494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:50.554 [2024-11-27 22:02:13.438500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.554 [2024-11-27 22:02:13.438506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.554 [2024-11-27 22:02:13.438529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.554 [2024-11-27 22:02:13.438536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:50.554 [2024-11-27 22:02:13.438544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.554 [2024-11-27 22:02:13.438550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.554 [2024-11-27 22:02:13.438578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.554 [2024-11-27 22:02:13.438585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:50.554 [2024-11-27 22:02:13.438591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.554 [2024-11-27 22:02:13.438596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.554 [2024-11-27 22:02:13.438629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:50.554 [2024-11-27 22:02:13.438639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:50.554 [2024-11-27 22:02:13.438644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:50.554 [2024-11-27 22:02:13.438650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:50.554 [2024-11-27 22:02:13.438746] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 31.104 ms, result 0 00:29:50.554 22:02:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:50.554 22:02:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:50.554 22:02:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:29:50.554 22:02:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:29:50.554 22:02:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:29:50.554 22:02:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:50.554 22:02:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:29:50.554 Remove shared memory files 00:29:50.554 22:02:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:50.554 22:02:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:50.554 22:02:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:50.554 22:02:13 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid93846 00:29:50.554 22:02:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:50.554 22:02:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:50.554 00:29:50.554 real 1m14.581s 00:29:50.554 user 1m39.275s 00:29:50.554 sys 0m19.533s 00:29:50.554 ************************************ 00:29:50.554 END TEST ftl_upgrade_shutdown 00:29:50.554 ************************************ 00:29:50.554 22:02:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:50.554 22:02:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:50.554 22:02:13 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:29:50.554 22:02:13 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:50.554 22:02:13 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:29:50.554 22:02:13 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:50.554 22:02:13 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:50.816 ************************************ 00:29:50.816 START TEST ftl_restore_fast 00:29:50.816 ************************************ 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:50.816 * Looking for test storage... 00:29:50.816 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:50.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.816 --rc genhtml_branch_coverage=1 00:29:50.816 --rc genhtml_function_coverage=1 00:29:50.816 --rc genhtml_legend=1 00:29:50.816 --rc geninfo_all_blocks=1 00:29:50.816 --rc geninfo_unexecuted_blocks=1 00:29:50.816 00:29:50.816 ' 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:50.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.816 --rc genhtml_branch_coverage=1 00:29:50.816 --rc genhtml_function_coverage=1 00:29:50.816 --rc genhtml_legend=1 00:29:50.816 --rc geninfo_all_blocks=1 00:29:50.816 --rc geninfo_unexecuted_blocks=1 00:29:50.816 00:29:50.816 ' 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:50.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.816 --rc genhtml_branch_coverage=1 00:29:50.816 --rc genhtml_function_coverage=1 00:29:50.816 --rc genhtml_legend=1 00:29:50.816 --rc geninfo_all_blocks=1 00:29:50.816 --rc geninfo_unexecuted_blocks=1 00:29:50.816 00:29:50.816 ' 00:29:50.816 22:02:13 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:50.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.816 --rc genhtml_branch_coverage=1 00:29:50.816 --rc genhtml_function_coverage=1 00:29:50.816 --rc genhtml_legend=1 00:29:50.816 --rc geninfo_all_blocks=1 00:29:50.816 --rc geninfo_unexecuted_blocks=1 00:29:50.816 00:29:50.817 ' 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
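For orientation, the device setup that the traced restore.sh performs over the following stretch of log condenses to roughly this RPC sequence (a sketch assembled from the calls visible below, with rpc.py standing for the scripts/rpc.py path set in the trace; the PCIe addresses, sizes and split parameters are the ones this particular run used, and <lvol uuid> stands in for the UUID returned by the lvstore step):

  rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0    # base NVMe device
  rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
  rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore uuid>          # thin-provisioned 103424 MiB lvol
  rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0     # NV cache device
  rpc.py bdev_split_create nvc0n1 -s 5171 1                              # 5171 MiB write-buffer partition
  rpc.py -t 240 bdev_ftl_create -b ftl0 -d <lvol uuid> --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown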
00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.9ZVnW8c0Ud 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:29:50.817 22:02:13 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=94295 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 94295 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 94295 ']' 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:50.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:50.817 22:02:13 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:29:51.077 [2024-11-27 22:02:13.941659] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:29:51.077 [2024-11-27 22:02:13.941807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94295 ] 00:29:51.077 [2024-11-27 22:02:14.090665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.077 [2024-11-27 22:02:14.113197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.012 22:02:14 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:52.012 22:02:14 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:29:52.012 22:02:14 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:29:52.012 22:02:14 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:29:52.012 22:02:14 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:52.012 22:02:14 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:29:52.012 22:02:14 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:29:52.012 22:02:14 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:52.012 22:02:15 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:29:52.012 22:02:15 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:29:52.012 22:02:15 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:29:52.012 22:02:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:29:52.012 22:02:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:52.012 22:02:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:52.012 22:02:15 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:29:52.012 22:02:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:29:52.271 22:02:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:52.271 { 00:29:52.271 "name": "nvme0n1", 00:29:52.271 "aliases": [ 00:29:52.271 "3d28cb52-de76-4099-bec8-fff829635e67" 00:29:52.271 ], 00:29:52.271 "product_name": "NVMe disk", 00:29:52.271 "block_size": 4096, 00:29:52.271 "num_blocks": 1310720, 00:29:52.271 "uuid": "3d28cb52-de76-4099-bec8-fff829635e67", 00:29:52.271 "numa_id": -1, 00:29:52.271 "assigned_rate_limits": { 00:29:52.271 "rw_ios_per_sec": 0, 00:29:52.271 "rw_mbytes_per_sec": 0, 00:29:52.271 "r_mbytes_per_sec": 0, 00:29:52.271 "w_mbytes_per_sec": 0 00:29:52.271 }, 00:29:52.271 "claimed": true, 00:29:52.271 "claim_type": "read_many_write_one", 00:29:52.271 "zoned": false, 00:29:52.271 "supported_io_types": { 00:29:52.271 "read": true, 00:29:52.271 "write": true, 00:29:52.271 "unmap": true, 00:29:52.271 "flush": true, 00:29:52.271 "reset": true, 00:29:52.271 "nvme_admin": true, 00:29:52.271 "nvme_io": true, 00:29:52.271 "nvme_io_md": false, 00:29:52.271 "write_zeroes": true, 00:29:52.271 "zcopy": false, 00:29:52.271 "get_zone_info": false, 00:29:52.271 "zone_management": false, 00:29:52.271 "zone_append": false, 00:29:52.271 "compare": true, 00:29:52.271 "compare_and_write": false, 00:29:52.271 "abort": true, 00:29:52.271 "seek_hole": false, 00:29:52.271 "seek_data": false, 00:29:52.271 "copy": true, 00:29:52.271 "nvme_iov_md": false 00:29:52.271 }, 00:29:52.271 "driver_specific": { 00:29:52.271 "nvme": [ 00:29:52.271 { 00:29:52.271 "pci_address": "0000:00:11.0", 00:29:52.271 "trid": { 00:29:52.271 "trtype": "PCIe", 00:29:52.271 "traddr": "0000:00:11.0" 00:29:52.271 }, 00:29:52.271 "ctrlr_data": { 00:29:52.271 "cntlid": 0, 00:29:52.271 "vendor_id": "0x1b36", 00:29:52.271 "model_number": "QEMU NVMe Ctrl", 00:29:52.271 "serial_number": "12341", 00:29:52.271 "firmware_revision": "8.0.0", 00:29:52.271 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:52.271 "oacs": { 00:29:52.271 "security": 0, 00:29:52.271 "format": 1, 00:29:52.271 "firmware": 0, 00:29:52.271 "ns_manage": 1 00:29:52.271 }, 00:29:52.271 "multi_ctrlr": false, 00:29:52.271 "ana_reporting": false 00:29:52.271 }, 00:29:52.271 "vs": { 00:29:52.271 "nvme_version": "1.4" 00:29:52.271 }, 00:29:52.271 "ns_data": { 00:29:52.271 "id": 1, 00:29:52.271 "can_share": false 00:29:52.271 } 00:29:52.271 } 00:29:52.271 ], 00:29:52.271 "mp_policy": "active_passive" 00:29:52.271 } 00:29:52.271 } 00:29:52.271 ]' 00:29:52.271 22:02:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:52.271 22:02:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:52.271 22:02:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:52.271 22:02:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:52.271 22:02:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:52.271 22:02:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:29:52.271 22:02:15 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:29:52.271 22:02:15 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:29:52.271 22:02:15 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:29:52.271 22:02:15 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:29:52.271 22:02:15 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:52.540 22:02:15 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=916e6ba0-1a4d-4086-b132-f62df3b11962 00:29:52.540 22:02:15 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:29:52.540 22:02:15 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 916e6ba0-1a4d-4086-b132-f62df3b11962 00:29:52.798 22:02:15 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:29:52.798 22:02:15 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=c7a5cec2-964a-4cc8-8d37-75fe3f1f1d47 00:29:52.798 22:02:15 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c7a5cec2-964a-4cc8-8d37-75fe3f1f1d47 00:29:53.057 22:02:16 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=6e3d0fb7-1a63-4eb6-b2fe-4d237ba600ab 00:29:53.057 22:02:16 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:29:53.057 22:02:16 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6e3d0fb7-1a63-4eb6-b2fe-4d237ba600ab 00:29:53.057 22:02:16 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:29:53.057 22:02:16 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:53.057 22:02:16 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=6e3d0fb7-1a63-4eb6-b2fe-4d237ba600ab 00:29:53.057 22:02:16 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:29:53.057 22:02:16 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 6e3d0fb7-1a63-4eb6-b2fe-4d237ba600ab 00:29:53.057 22:02:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=6e3d0fb7-1a63-4eb6-b2fe-4d237ba600ab 00:29:53.057 22:02:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:53.057 22:02:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:53.057 22:02:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:53.057 22:02:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6e3d0fb7-1a63-4eb6-b2fe-4d237ba600ab 00:29:53.316 22:02:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:53.316 { 00:29:53.316 "name": "6e3d0fb7-1a63-4eb6-b2fe-4d237ba600ab", 00:29:53.316 "aliases": [ 00:29:53.316 "lvs/nvme0n1p0" 00:29:53.316 ], 00:29:53.316 "product_name": "Logical Volume", 00:29:53.316 "block_size": 4096, 00:29:53.316 "num_blocks": 26476544, 00:29:53.316 "uuid": "6e3d0fb7-1a63-4eb6-b2fe-4d237ba600ab", 00:29:53.316 "assigned_rate_limits": { 00:29:53.316 "rw_ios_per_sec": 0, 00:29:53.316 "rw_mbytes_per_sec": 0, 00:29:53.316 "r_mbytes_per_sec": 0, 00:29:53.316 "w_mbytes_per_sec": 0 00:29:53.316 }, 00:29:53.316 "claimed": false, 00:29:53.316 "zoned": false, 00:29:53.316 "supported_io_types": { 00:29:53.316 "read": true, 00:29:53.316 "write": true, 00:29:53.316 "unmap": true, 00:29:53.316 "flush": false, 00:29:53.316 "reset": true, 00:29:53.316 "nvme_admin": false, 00:29:53.316 "nvme_io": false, 00:29:53.316 "nvme_io_md": false, 00:29:53.316 "write_zeroes": true, 00:29:53.316 "zcopy": false, 00:29:53.316 "get_zone_info": false, 00:29:53.316 "zone_management": false, 00:29:53.316 "zone_append": 
false, 00:29:53.316 "compare": false, 00:29:53.316 "compare_and_write": false, 00:29:53.316 "abort": false, 00:29:53.316 "seek_hole": true, 00:29:53.316 "seek_data": true, 00:29:53.316 "copy": false, 00:29:53.316 "nvme_iov_md": false 00:29:53.316 }, 00:29:53.316 "driver_specific": { 00:29:53.316 "lvol": { 00:29:53.316 "lvol_store_uuid": "c7a5cec2-964a-4cc8-8d37-75fe3f1f1d47", 00:29:53.316 "base_bdev": "nvme0n1", 00:29:53.316 "thin_provision": true, 00:29:53.316 "num_allocated_clusters": 0, 00:29:53.316 "snapshot": false, 00:29:53.316 "clone": false, 00:29:53.316 "esnap_clone": false 00:29:53.316 } 00:29:53.316 } 00:29:53.316 } 00:29:53.316 ]' 00:29:53.316 22:02:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:53.316 22:02:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:53.316 22:02:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:53.316 22:02:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:53.316 22:02:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:53.316 22:02:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:29:53.316 22:02:16 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:29:53.316 22:02:16 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:29:53.316 22:02:16 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:29:53.575 22:02:16 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:29:53.575 22:02:16 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:29:53.575 22:02:16 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 6e3d0fb7-1a63-4eb6-b2fe-4d237ba600ab 00:29:53.575 22:02:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=6e3d0fb7-1a63-4eb6-b2fe-4d237ba600ab 00:29:53.575 22:02:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:53.575 22:02:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:53.575 22:02:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:53.575 22:02:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6e3d0fb7-1a63-4eb6-b2fe-4d237ba600ab 00:29:53.834 22:02:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:53.834 { 00:29:53.834 "name": "6e3d0fb7-1a63-4eb6-b2fe-4d237ba600ab", 00:29:53.834 "aliases": [ 00:29:53.834 "lvs/nvme0n1p0" 00:29:53.834 ], 00:29:53.834 "product_name": "Logical Volume", 00:29:53.834 "block_size": 4096, 00:29:53.834 "num_blocks": 26476544, 00:29:53.834 "uuid": "6e3d0fb7-1a63-4eb6-b2fe-4d237ba600ab", 00:29:53.834 "assigned_rate_limits": { 00:29:53.834 "rw_ios_per_sec": 0, 00:29:53.834 "rw_mbytes_per_sec": 0, 00:29:53.834 "r_mbytes_per_sec": 0, 00:29:53.834 "w_mbytes_per_sec": 0 00:29:53.834 }, 00:29:53.834 "claimed": false, 00:29:53.834 "zoned": false, 00:29:53.834 "supported_io_types": { 00:29:53.834 "read": true, 00:29:53.835 "write": true, 00:29:53.835 "unmap": true, 00:29:53.835 "flush": false, 00:29:53.835 "reset": true, 00:29:53.835 "nvme_admin": false, 00:29:53.835 "nvme_io": false, 00:29:53.835 "nvme_io_md": false, 00:29:53.835 "write_zeroes": true, 00:29:53.835 "zcopy": false, 00:29:53.835 "get_zone_info": false, 00:29:53.835 "zone_management": false, 
00:29:53.835 "zone_append": false, 00:29:53.835 "compare": false, 00:29:53.835 "compare_and_write": false, 00:29:53.835 "abort": false, 00:29:53.835 "seek_hole": true, 00:29:53.835 "seek_data": true, 00:29:53.835 "copy": false, 00:29:53.835 "nvme_iov_md": false 00:29:53.835 }, 00:29:53.835 "driver_specific": { 00:29:53.835 "lvol": { 00:29:53.835 "lvol_store_uuid": "c7a5cec2-964a-4cc8-8d37-75fe3f1f1d47", 00:29:53.835 "base_bdev": "nvme0n1", 00:29:53.835 "thin_provision": true, 00:29:53.835 "num_allocated_clusters": 0, 00:29:53.835 "snapshot": false, 00:29:53.835 "clone": false, 00:29:53.835 "esnap_clone": false 00:29:53.835 } 00:29:53.835 } 00:29:53.835 } 00:29:53.835 ]' 00:29:53.835 22:02:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:53.835 22:02:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:53.835 22:02:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:53.835 22:02:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:53.835 22:02:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:53.835 22:02:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:29:53.835 22:02:16 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:29:53.835 22:02:16 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:29:54.093 22:02:17 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:29:54.093 22:02:17 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 6e3d0fb7-1a63-4eb6-b2fe-4d237ba600ab 00:29:54.093 22:02:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=6e3d0fb7-1a63-4eb6-b2fe-4d237ba600ab 00:29:54.093 22:02:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:54.093 22:02:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:29:54.093 22:02:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:29:54.093 22:02:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6e3d0fb7-1a63-4eb6-b2fe-4d237ba600ab 00:29:54.353 22:02:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:54.353 { 00:29:54.353 "name": "6e3d0fb7-1a63-4eb6-b2fe-4d237ba600ab", 00:29:54.353 "aliases": [ 00:29:54.353 "lvs/nvme0n1p0" 00:29:54.353 ], 00:29:54.353 "product_name": "Logical Volume", 00:29:54.353 "block_size": 4096, 00:29:54.353 "num_blocks": 26476544, 00:29:54.353 "uuid": "6e3d0fb7-1a63-4eb6-b2fe-4d237ba600ab", 00:29:54.353 "assigned_rate_limits": { 00:29:54.353 "rw_ios_per_sec": 0, 00:29:54.353 "rw_mbytes_per_sec": 0, 00:29:54.353 "r_mbytes_per_sec": 0, 00:29:54.353 "w_mbytes_per_sec": 0 00:29:54.353 }, 00:29:54.353 "claimed": false, 00:29:54.353 "zoned": false, 00:29:54.353 "supported_io_types": { 00:29:54.353 "read": true, 00:29:54.353 "write": true, 00:29:54.353 "unmap": true, 00:29:54.353 "flush": false, 00:29:54.353 "reset": true, 00:29:54.353 "nvme_admin": false, 00:29:54.353 "nvme_io": false, 00:29:54.353 "nvme_io_md": false, 00:29:54.353 "write_zeroes": true, 00:29:54.353 "zcopy": false, 00:29:54.353 "get_zone_info": false, 00:29:54.353 "zone_management": false, 00:29:54.353 "zone_append": false, 00:29:54.353 "compare": false, 00:29:54.353 "compare_and_write": false, 00:29:54.353 "abort": false, 00:29:54.353 "seek_hole": 
true, 00:29:54.353 "seek_data": true, 00:29:54.353 "copy": false, 00:29:54.353 "nvme_iov_md": false 00:29:54.353 }, 00:29:54.353 "driver_specific": { 00:29:54.353 "lvol": { 00:29:54.353 "lvol_store_uuid": "c7a5cec2-964a-4cc8-8d37-75fe3f1f1d47", 00:29:54.353 "base_bdev": "nvme0n1", 00:29:54.353 "thin_provision": true, 00:29:54.353 "num_allocated_clusters": 0, 00:29:54.353 "snapshot": false, 00:29:54.353 "clone": false, 00:29:54.353 "esnap_clone": false 00:29:54.353 } 00:29:54.353 } 00:29:54.353 } 00:29:54.353 ]' 00:29:54.353 22:02:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:54.353 22:02:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:29:54.353 22:02:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:54.353 22:02:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:54.353 22:02:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:54.353 22:02:17 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:29:54.353 22:02:17 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:29:54.353 22:02:17 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 6e3d0fb7-1a63-4eb6-b2fe-4d237ba600ab --l2p_dram_limit 10' 00:29:54.353 22:02:17 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:29:54.353 22:02:17 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:29:54.353 22:02:17 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:29:54.353 22:02:17 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:29:54.353 22:02:17 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:29:54.353 22:02:17 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6e3d0fb7-1a63-4eb6-b2fe-4d237ba600ab --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:29:54.614 [2024-11-27 22:02:17.555106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.614 [2024-11-27 22:02:17.555146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:54.614 [2024-11-27 22:02:17.555158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:54.614 [2024-11-27 22:02:17.555165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.614 [2024-11-27 22:02:17.555208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.614 [2024-11-27 22:02:17.555219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:54.614 [2024-11-27 22:02:17.555225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:29:54.614 [2024-11-27 22:02:17.555233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.614 [2024-11-27 22:02:17.555247] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:54.614 [2024-11-27 22:02:17.555468] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:54.614 [2024-11-27 22:02:17.555490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.614 [2024-11-27 22:02:17.555498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:54.614 [2024-11-27 22:02:17.555504] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:29:54.614 [2024-11-27 22:02:17.555512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.614 [2024-11-27 22:02:17.555535] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 97042294-04eb-4fc0-af05-f54f26f34808 00:29:54.614 [2024-11-27 22:02:17.556516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.614 [2024-11-27 22:02:17.556541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:29:54.614 [2024-11-27 22:02:17.556551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:29:54.614 [2024-11-27 22:02:17.556556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.614 [2024-11-27 22:02:17.561213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.614 [2024-11-27 22:02:17.561238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:54.614 [2024-11-27 22:02:17.561247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.596 ms 00:29:54.614 [2024-11-27 22:02:17.561253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.614 [2024-11-27 22:02:17.561313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.614 [2024-11-27 22:02:17.561320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:54.614 [2024-11-27 22:02:17.561328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:29:54.614 [2024-11-27 22:02:17.561342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.614 [2024-11-27 22:02:17.561382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.614 [2024-11-27 22:02:17.561392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:54.614 [2024-11-27 22:02:17.561400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:54.614 [2024-11-27 22:02:17.561412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.614 [2024-11-27 22:02:17.561430] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:54.614 [2024-11-27 22:02:17.562635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.614 [2024-11-27 22:02:17.562740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:54.614 [2024-11-27 22:02:17.562752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.211 ms 00:29:54.614 [2024-11-27 22:02:17.562760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.614 [2024-11-27 22:02:17.562786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.614 [2024-11-27 22:02:17.562794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:54.614 [2024-11-27 22:02:17.562801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:54.614 [2024-11-27 22:02:17.562809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.614 [2024-11-27 22:02:17.562823] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:29:54.614 [2024-11-27 22:02:17.562938] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:54.614 [2024-11-27 22:02:17.562947] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:54.614 [2024-11-27 22:02:17.562957] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:54.614 [2024-11-27 22:02:17.562964] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:54.614 [2024-11-27 22:02:17.562974] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:54.614 [2024-11-27 22:02:17.562980] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:54.614 [2024-11-27 22:02:17.562990] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:54.614 [2024-11-27 22:02:17.562996] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:54.614 [2024-11-27 22:02:17.563002] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:54.615 [2024-11-27 22:02:17.563008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.615 [2024-11-27 22:02:17.563014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:54.615 [2024-11-27 22:02:17.563020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:29:54.615 [2024-11-27 22:02:17.563027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.615 [2024-11-27 22:02:17.563091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.615 [2024-11-27 22:02:17.563100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:54.615 [2024-11-27 22:02:17.563106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:29:54.615 [2024-11-27 22:02:17.563114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.615 [2024-11-27 22:02:17.563185] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:54.615 [2024-11-27 22:02:17.563194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:54.615 [2024-11-27 22:02:17.563200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:54.615 [2024-11-27 22:02:17.563207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:54.615 [2024-11-27 22:02:17.563212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:54.615 [2024-11-27 22:02:17.563218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:54.615 [2024-11-27 22:02:17.563223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:54.615 [2024-11-27 22:02:17.563229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:54.615 [2024-11-27 22:02:17.563235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:54.615 [2024-11-27 22:02:17.563241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:54.615 [2024-11-27 22:02:17.563246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:54.615 [2024-11-27 22:02:17.563252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:54.615 [2024-11-27 22:02:17.563258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:54.615 [2024-11-27 22:02:17.563269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:54.615 [2024-11-27 22:02:17.563274] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:29:54.615 [2024-11-27 22:02:17.563281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:54.615 [2024-11-27 22:02:17.563286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:54.615 [2024-11-27 22:02:17.563292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:54.615 [2024-11-27 22:02:17.563297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:54.615 [2024-11-27 22:02:17.563303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:54.615 [2024-11-27 22:02:17.563308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:54.615 [2024-11-27 22:02:17.563314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:54.615 [2024-11-27 22:02:17.563319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:54.615 [2024-11-27 22:02:17.563326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:54.615 [2024-11-27 22:02:17.563332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:54.615 [2024-11-27 22:02:17.563349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:54.615 [2024-11-27 22:02:17.563355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:54.615 [2024-11-27 22:02:17.563362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:54.615 [2024-11-27 22:02:17.563368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:54.615 [2024-11-27 22:02:17.563376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:54.615 [2024-11-27 22:02:17.563382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:54.615 [2024-11-27 22:02:17.563389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:54.615 [2024-11-27 22:02:17.563395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:54.615 [2024-11-27 22:02:17.563402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:54.615 [2024-11-27 22:02:17.563408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:54.615 [2024-11-27 22:02:17.563415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:54.615 [2024-11-27 22:02:17.563420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:54.615 [2024-11-27 22:02:17.563427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:54.615 [2024-11-27 22:02:17.563433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:54.615 [2024-11-27 22:02:17.563442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:54.615 [2024-11-27 22:02:17.563448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:54.615 [2024-11-27 22:02:17.563455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:54.615 [2024-11-27 22:02:17.563460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:54.615 [2024-11-27 22:02:17.563467] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:54.615 [2024-11-27 22:02:17.563477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:54.615 [2024-11-27 22:02:17.563487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:54.615 [2024-11-27 
22:02:17.563494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:54.615 [2024-11-27 22:02:17.563507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:54.615 [2024-11-27 22:02:17.563513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:54.615 [2024-11-27 22:02:17.563519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:54.615 [2024-11-27 22:02:17.563526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:54.615 [2024-11-27 22:02:17.563533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:54.615 [2024-11-27 22:02:17.563538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:54.615 [2024-11-27 22:02:17.563548] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:54.615 [2024-11-27 22:02:17.563556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:54.615 [2024-11-27 22:02:17.563564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:54.615 [2024-11-27 22:02:17.563570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:54.615 [2024-11-27 22:02:17.563578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:54.615 [2024-11-27 22:02:17.563584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:54.615 [2024-11-27 22:02:17.563592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:54.615 [2024-11-27 22:02:17.563599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:54.615 [2024-11-27 22:02:17.563608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:54.615 [2024-11-27 22:02:17.563614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:54.615 [2024-11-27 22:02:17.563621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:54.615 [2024-11-27 22:02:17.563627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:54.615 [2024-11-27 22:02:17.563635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:54.615 [2024-11-27 22:02:17.563641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:54.615 [2024-11-27 22:02:17.563648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:54.615 [2024-11-27 22:02:17.563654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:54.615 [2024-11-27 
22:02:17.563662] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:54.615 [2024-11-27 22:02:17.563669] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:54.615 [2024-11-27 22:02:17.563678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:54.615 [2024-11-27 22:02:17.563684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:54.615 [2024-11-27 22:02:17.563691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:54.615 [2024-11-27 22:02:17.563698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:54.615 [2024-11-27 22:02:17.563705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.615 [2024-11-27 22:02:17.563712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:54.615 [2024-11-27 22:02:17.563723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:29:54.615 [2024-11-27 22:02:17.563729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.615 [2024-11-27 22:02:17.563757] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:29:54.615 [2024-11-27 22:02:17.563764] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:29:57.149 [2024-11-27 22:02:19.972624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.149 [2024-11-27 22:02:19.972766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:29:57.149 [2024-11-27 22:02:19.972831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2408.854 ms 00:29:57.149 [2024-11-27 22:02:19.972852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.149 [2024-11-27 22:02:19.980176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.149 [2024-11-27 22:02:19.980290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:57.149 [2024-11-27 22:02:19.980352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.243 ms 00:29:57.149 [2024-11-27 22:02:19.980371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.149 [2024-11-27 22:02:19.980459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.149 [2024-11-27 22:02:19.980579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:57.149 [2024-11-27 22:02:19.980601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:29:57.149 [2024-11-27 22:02:19.980616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.149 [2024-11-27 22:02:19.987877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.149 [2024-11-27 22:02:19.987984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:57.149 [2024-11-27 22:02:19.988039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.211 ms 00:29:57.149 [2024-11-27 22:02:19.988059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:29:57.149 [2024-11-27 22:02:19.988091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.149 [2024-11-27 22:02:19.988135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:57.149 [2024-11-27 22:02:19.988154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:57.149 [2024-11-27 22:02:19.988169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.149 [2024-11-27 22:02:19.988503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.149 [2024-11-27 22:02:19.988585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:57.149 [2024-11-27 22:02:19.988632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:29:57.149 [2024-11-27 22:02:19.988649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.149 [2024-11-27 22:02:19.988747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.149 [2024-11-27 22:02:19.988800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:57.149 [2024-11-27 22:02:19.988868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:29:57.149 [2024-11-27 22:02:19.988884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.149 [2024-11-27 22:02:19.993584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.149 [2024-11-27 22:02:19.993666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:57.149 [2024-11-27 22:02:19.993708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.674 ms 00:29:57.149 [2024-11-27 22:02:19.993726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.149 [2024-11-27 22:02:20.012321] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:57.149 [2024-11-27 22:02:20.015142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.149 [2024-11-27 22:02:20.015251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:57.149 [2024-11-27 22:02:20.015309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.349 ms 00:29:57.149 [2024-11-27 22:02:20.015352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.149 [2024-11-27 22:02:20.055365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.149 [2024-11-27 22:02:20.055476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:29:57.149 [2024-11-27 22:02:20.055520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.885 ms 00:29:57.149 [2024-11-27 22:02:20.055542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.149 [2024-11-27 22:02:20.055695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.149 [2024-11-27 22:02:20.055805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:57.149 [2024-11-27 22:02:20.055835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:29:57.149 [2024-11-27 22:02:20.055852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.149 [2024-11-27 22:02:20.058266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.149 [2024-11-27 22:02:20.058373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 
00:29:57.149 [2024-11-27 22:02:20.058423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.370 ms 00:29:57.149 [2024-11-27 22:02:20.058446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.149 [2024-11-27 22:02:20.060362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.149 [2024-11-27 22:02:20.060446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:29:57.149 [2024-11-27 22:02:20.060497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.881 ms 00:29:57.149 [2024-11-27 22:02:20.060515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.149 [2024-11-27 22:02:20.060760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.149 [2024-11-27 22:02:20.060902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:57.149 [2024-11-27 22:02:20.060944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:29:57.149 [2024-11-27 22:02:20.060964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.149 [2024-11-27 22:02:20.084554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.149 [2024-11-27 22:02:20.084651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:29:57.149 [2024-11-27 22:02:20.084709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.563 ms 00:29:57.149 [2024-11-27 22:02:20.084729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.149 [2024-11-27 22:02:20.088041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.149 [2024-11-27 22:02:20.088130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:29:57.149 [2024-11-27 22:02:20.088176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.271 ms 00:29:57.149 [2024-11-27 22:02:20.088197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.149 [2024-11-27 22:02:20.090751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.149 [2024-11-27 22:02:20.090838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:29:57.149 [2024-11-27 22:02:20.090883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.516 ms 00:29:57.149 [2024-11-27 22:02:20.090901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.149 [2024-11-27 22:02:20.093432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.149 [2024-11-27 22:02:20.093522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:57.149 [2024-11-27 22:02:20.093565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.499 ms 00:29:57.149 [2024-11-27 22:02:20.093587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.149 [2024-11-27 22:02:20.093624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.149 [2024-11-27 22:02:20.093667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:57.149 [2024-11-27 22:02:20.093685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:57.149 [2024-11-27 22:02:20.093702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.149 [2024-11-27 22:02:20.093782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.149 [2024-11-27 22:02:20.093804] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:57.149 [2024-11-27 22:02:20.093905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:29:57.149 [2024-11-27 22:02:20.093926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.149 [2024-11-27 22:02:20.094686] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2539.230 ms, result 0 00:29:57.149 { 00:29:57.149 "name": "ftl0", 00:29:57.149 "uuid": "97042294-04eb-4fc0-af05-f54f26f34808" 00:29:57.149 } 00:29:57.149 22:02:20 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:29:57.149 22:02:20 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:29:57.408 22:02:20 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:29:57.408 22:02:20 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:29:57.408 [2024-11-27 22:02:20.501356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.408 [2024-11-27 22:02:20.501466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:57.408 [2024-11-27 22:02:20.501485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:57.408 [2024-11-27 22:02:20.501492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.408 [2024-11-27 22:02:20.501515] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:57.408 [2024-11-27 22:02:20.501893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.408 [2024-11-27 22:02:20.501916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:57.408 [2024-11-27 22:02:20.501923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:29:57.408 [2024-11-27 22:02:20.501934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.408 [2024-11-27 22:02:20.502127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.408 [2024-11-27 22:02:20.502137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:57.408 [2024-11-27 22:02:20.502147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:29:57.408 [2024-11-27 22:02:20.502157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.408 [2024-11-27 22:02:20.504575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.408 [2024-11-27 22:02:20.504656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:57.408 [2024-11-27 22:02:20.504670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.406 ms 00:29:57.408 [2024-11-27 22:02:20.504677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.408 [2024-11-27 22:02:20.509284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.408 [2024-11-27 22:02:20.509309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:57.408 [2024-11-27 22:02:20.509317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.591 ms 00:29:57.408 [2024-11-27 22:02:20.509327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.408 [2024-11-27 22:02:20.510643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.408 
[2024-11-27 22:02:20.510675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:57.408 [2024-11-27 22:02:20.510682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.261 ms 00:29:57.408 [2024-11-27 22:02:20.510689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.408 [2024-11-27 22:02:20.514273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.408 [2024-11-27 22:02:20.514305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:57.408 [2024-11-27 22:02:20.514313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.558 ms 00:29:57.408 [2024-11-27 22:02:20.514320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.408 [2024-11-27 22:02:20.514423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.408 [2024-11-27 22:02:20.514434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:57.408 [2024-11-27 22:02:20.514440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:29:57.408 [2024-11-27 22:02:20.514449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.408 [2024-11-27 22:02:20.515714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.408 [2024-11-27 22:02:20.515808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:57.408 [2024-11-27 22:02:20.515819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.251 ms 00:29:57.408 [2024-11-27 22:02:20.515826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.408 [2024-11-27 22:02:20.517034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.408 [2024-11-27 22:02:20.517065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:57.408 [2024-11-27 22:02:20.517072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.183 ms 00:29:57.408 [2024-11-27 22:02:20.517078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.408 [2024-11-27 22:02:20.518120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.408 [2024-11-27 22:02:20.518149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:57.408 [2024-11-27 22:02:20.518156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.011 ms 00:29:57.408 [2024-11-27 22:02:20.518163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.408 [2024-11-27 22:02:20.519078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.408 [2024-11-27 22:02:20.519167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:57.408 [2024-11-27 22:02:20.519178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.871 ms 00:29:57.408 [2024-11-27 22:02:20.519185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.408 [2024-11-27 22:02:20.519207] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:57.408 [2024-11-27 22:02:20.519221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:57.408 [2024-11-27 22:02:20.519229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:57.408 [2024-11-27 22:02:20.519237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519416] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 
22:02:20.519583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:29:57.409 [2024-11-27 22:02:20.519744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:57.409 [2024-11-27 22:02:20.519832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:57.410 [2024-11-27 22:02:20.519838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:57.410 [2024-11-27 22:02:20.519844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:57.410 [2024-11-27 22:02:20.519850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:57.410 [2024-11-27 22:02:20.519857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:57.410 [2024-11-27 22:02:20.519863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:57.410 [2024-11-27 22:02:20.519870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:57.410 [2024-11-27 22:02:20.519876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:57.410 [2024-11-27 22:02:20.519883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:57.410 [2024-11-27 22:02:20.519889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:57.410 [2024-11-27 22:02:20.519904] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:57.410 [2024-11-27 22:02:20.519910] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97042294-04eb-4fc0-af05-f54f26f34808 00:29:57.410 
[2024-11-27 22:02:20.519918] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:57.410 [2024-11-27 22:02:20.519923] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:57.410 [2024-11-27 22:02:20.519930] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:57.410 [2024-11-27 22:02:20.519935] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:57.410 [2024-11-27 22:02:20.519945] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:57.410 [2024-11-27 22:02:20.519951] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:57.410 [2024-11-27 22:02:20.519958] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:57.410 [2024-11-27 22:02:20.519963] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:57.410 [2024-11-27 22:02:20.519969] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:57.410 [2024-11-27 22:02:20.519975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.410 [2024-11-27 22:02:20.519982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:57.410 [2024-11-27 22:02:20.519988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.768 ms 00:29:57.410 [2024-11-27 22:02:20.519995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.410 [2024-11-27 22:02:20.521246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.410 [2024-11-27 22:02:20.521263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:57.410 [2024-11-27 22:02:20.521272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.238 ms 00:29:57.410 [2024-11-27 22:02:20.521280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.410 [2024-11-27 22:02:20.521376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.410 [2024-11-27 22:02:20.521386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:57.410 [2024-11-27 22:02:20.521392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:29:57.410 [2024-11-27 22:02:20.521399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.410 [2024-11-27 22:02:20.526114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.410 [2024-11-27 22:02:20.526204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:57.410 [2024-11-27 22:02:20.526247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.410 [2024-11-27 22:02:20.526265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.410 [2024-11-27 22:02:20.526317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.410 [2024-11-27 22:02:20.526420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:57.410 [2024-11-27 22:02:20.526440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.410 [2024-11-27 22:02:20.526456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.410 [2024-11-27 22:02:20.526506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.410 [2024-11-27 22:02:20.526531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:57.410 [2024-11-27 22:02:20.526550] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.410 [2024-11-27 22:02:20.526625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.410 [2024-11-27 22:02:20.526652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.410 [2024-11-27 22:02:20.526669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:57.410 [2024-11-27 22:02:20.526684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.410 [2024-11-27 22:02:20.526700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.671 [2024-11-27 22:02:20.534637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.671 [2024-11-27 22:02:20.534749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:57.671 [2024-11-27 22:02:20.534797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.671 [2024-11-27 22:02:20.534817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.671 [2024-11-27 22:02:20.541348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.671 [2024-11-27 22:02:20.541457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:57.671 [2024-11-27 22:02:20.541500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.671 [2024-11-27 22:02:20.541519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.671 [2024-11-27 22:02:20.541581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.671 [2024-11-27 22:02:20.541664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:57.671 [2024-11-27 22:02:20.541682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.671 [2024-11-27 22:02:20.541699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.671 [2024-11-27 22:02:20.541741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.671 [2024-11-27 22:02:20.541831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:57.671 [2024-11-27 22:02:20.541849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.671 [2024-11-27 22:02:20.541865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.671 [2024-11-27 22:02:20.541930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.671 [2024-11-27 22:02:20.542000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:57.671 [2024-11-27 22:02:20.542018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.671 [2024-11-27 22:02:20.542034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.671 [2024-11-27 22:02:20.542077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.671 [2024-11-27 22:02:20.542169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:57.671 [2024-11-27 22:02:20.542187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.671 [2024-11-27 22:02:20.542204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.671 [2024-11-27 22:02:20.542243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.671 [2024-11-27 22:02:20.542315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:29:57.671 [2024-11-27 22:02:20.542343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.671 [2024-11-27 22:02:20.542360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.671 [2024-11-27 22:02:20.542409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.671 [2024-11-27 22:02:20.542463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:57.671 [2024-11-27 22:02:20.542482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.671 [2024-11-27 22:02:20.542498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.671 [2024-11-27 22:02:20.542609] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 41.236 ms, result 0 00:29:57.671 true 00:29:57.671 22:02:20 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 94295 00:29:57.671 22:02:20 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 94295 ']' 00:29:57.671 22:02:20 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 94295 00:29:57.671 22:02:20 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:29:57.671 22:02:20 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:57.671 22:02:20 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94295 00:29:57.671 22:02:20 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:57.671 22:02:20 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:57.671 killing process with pid 94295 00:29:57.671 22:02:20 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94295' 00:29:57.671 22:02:20 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 94295 00:29:57.671 22:02:20 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 94295 00:30:02.960 22:02:25 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:30:06.387 262144+0 records in 00:30:06.387 262144+0 records out 00:30:06.387 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.72031 s, 289 MB/s 00:30:06.387 22:02:29 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:08.933 22:02:31 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:08.933 [2024-11-27 22:02:31.540688] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:30:08.933 [2024-11-27 22:02:31.540796] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94481 ] 00:30:08.933 [2024-11-27 22:02:31.676561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.933 [2024-11-27 22:02:31.693401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.933 [2024-11-27 22:02:31.774786] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:08.933 [2024-11-27 22:02:31.774951] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:08.933 [2024-11-27 22:02:31.916751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.933 [2024-11-27 22:02:31.916789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:08.933 [2024-11-27 22:02:31.916802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:08.933 [2024-11-27 22:02:31.916809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.933 [2024-11-27 22:02:31.916841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.933 [2024-11-27 22:02:31.916851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:08.933 [2024-11-27 22:02:31.916857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:30:08.933 [2024-11-27 22:02:31.916867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.933 [2024-11-27 22:02:31.916883] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:08.933 [2024-11-27 22:02:31.917060] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:08.933 [2024-11-27 22:02:31.917071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.933 [2024-11-27 22:02:31.917078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:08.933 [2024-11-27 22:02:31.917088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:30:08.933 [2024-11-27 22:02:31.917096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.933 [2024-11-27 22:02:31.918000] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:08.933 [2024-11-27 22:02:31.919836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.933 [2024-11-27 22:02:31.919862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:08.933 [2024-11-27 22:02:31.919874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.837 ms 00:30:08.933 [2024-11-27 22:02:31.919883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.933 [2024-11-27 22:02:31.919923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.933 [2024-11-27 22:02:31.919934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:08.933 [2024-11-27 22:02:31.919943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:08.933 [2024-11-27 22:02:31.919948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.933 [2024-11-27 22:02:31.924167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:08.933 [2024-11-27 22:02:31.924191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:08.933 [2024-11-27 22:02:31.924201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.192 ms 00:30:08.933 [2024-11-27 22:02:31.924212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.933 [2024-11-27 22:02:31.924270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.933 [2024-11-27 22:02:31.924277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:08.933 [2024-11-27 22:02:31.924285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:30:08.933 [2024-11-27 22:02:31.924291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.933 [2024-11-27 22:02:31.924327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.933 [2024-11-27 22:02:31.924344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:08.933 [2024-11-27 22:02:31.924350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:08.933 [2024-11-27 22:02:31.924359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.933 [2024-11-27 22:02:31.924373] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:08.933 [2024-11-27 22:02:31.925506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.933 [2024-11-27 22:02:31.925529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:08.933 [2024-11-27 22:02:31.925539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.137 ms 00:30:08.933 [2024-11-27 22:02:31.925544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.933 [2024-11-27 22:02:31.925569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.933 [2024-11-27 22:02:31.925575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:08.933 [2024-11-27 22:02:31.925581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:08.933 [2024-11-27 22:02:31.925589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.933 [2024-11-27 22:02:31.925608] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:08.933 [2024-11-27 22:02:31.925624] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:08.933 [2024-11-27 22:02:31.925653] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:08.933 [2024-11-27 22:02:31.925665] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:08.933 [2024-11-27 22:02:31.925743] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:08.933 [2024-11-27 22:02:31.925751] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:08.933 [2024-11-27 22:02:31.925763] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:08.933 [2024-11-27 22:02:31.925771] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:08.933 [2024-11-27 22:02:31.925778] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:08.933 [2024-11-27 22:02:31.925784] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:08.933 [2024-11-27 22:02:31.925789] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:08.933 [2024-11-27 22:02:31.925795] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:08.933 [2024-11-27 22:02:31.925801] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:08.933 [2024-11-27 22:02:31.925806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.933 [2024-11-27 22:02:31.925812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:08.933 [2024-11-27 22:02:31.925818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:30:08.933 [2024-11-27 22:02:31.925823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.933 [2024-11-27 22:02:31.925889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.933 [2024-11-27 22:02:31.925896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:08.933 [2024-11-27 22:02:31.925901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:30:08.933 [2024-11-27 22:02:31.925907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.933 [2024-11-27 22:02:31.925980] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:08.933 [2024-11-27 22:02:31.925987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:08.933 [2024-11-27 22:02:31.925993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:08.933 [2024-11-27 22:02:31.925999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.933 [2024-11-27 22:02:31.926005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:08.933 [2024-11-27 22:02:31.926010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:08.933 [2024-11-27 22:02:31.926015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:08.933 [2024-11-27 22:02:31.926021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:08.933 [2024-11-27 22:02:31.926027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:08.933 [2024-11-27 22:02:31.926032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:08.933 [2024-11-27 22:02:31.926037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:08.933 [2024-11-27 22:02:31.926044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:08.933 [2024-11-27 22:02:31.926051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:08.933 [2024-11-27 22:02:31.926056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:08.933 [2024-11-27 22:02:31.926061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:08.933 [2024-11-27 22:02:31.926066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.933 [2024-11-27 22:02:31.926072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:08.933 [2024-11-27 22:02:31.926077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:08.933 [2024-11-27 22:02:31.926081] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.933 [2024-11-27 22:02:31.926086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:08.933 [2024-11-27 22:02:31.926092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:08.933 [2024-11-27 22:02:31.926096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:08.933 [2024-11-27 22:02:31.926101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:08.933 [2024-11-27 22:02:31.926106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:08.933 [2024-11-27 22:02:31.926111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:08.933 [2024-11-27 22:02:31.926116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:08.933 [2024-11-27 22:02:31.926121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:08.933 [2024-11-27 22:02:31.926126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:08.933 [2024-11-27 22:02:31.926133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:08.933 [2024-11-27 22:02:31.926138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:08.934 [2024-11-27 22:02:31.926143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:08.934 [2024-11-27 22:02:31.926148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:08.934 [2024-11-27 22:02:31.926154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:08.934 [2024-11-27 22:02:31.926159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:08.934 [2024-11-27 22:02:31.926165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:08.934 [2024-11-27 22:02:31.926171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:08.934 [2024-11-27 22:02:31.926176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:08.934 [2024-11-27 22:02:31.926182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:08.934 [2024-11-27 22:02:31.926187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:08.934 [2024-11-27 22:02:31.926193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.934 [2024-11-27 22:02:31.926199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:08.934 [2024-11-27 22:02:31.926204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:08.934 [2024-11-27 22:02:31.926211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.934 [2024-11-27 22:02:31.926217] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:08.934 [2024-11-27 22:02:31.926228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:08.934 [2024-11-27 22:02:31.926234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:08.934 [2024-11-27 22:02:31.926240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.934 [2024-11-27 22:02:31.926247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:08.934 [2024-11-27 22:02:31.926253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:08.934 [2024-11-27 22:02:31.926259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:08.934 
[2024-11-27 22:02:31.926264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:08.934 [2024-11-27 22:02:31.926270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:08.934 [2024-11-27 22:02:31.926276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:08.934 [2024-11-27 22:02:31.926283] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:08.934 [2024-11-27 22:02:31.926294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:08.934 [2024-11-27 22:02:31.926300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:08.934 [2024-11-27 22:02:31.926307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:08.934 [2024-11-27 22:02:31.926313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:08.934 [2024-11-27 22:02:31.926319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:08.934 [2024-11-27 22:02:31.926324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:08.934 [2024-11-27 22:02:31.926332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:08.934 [2024-11-27 22:02:31.926354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:08.934 [2024-11-27 22:02:31.926360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:08.934 [2024-11-27 22:02:31.926367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:08.934 [2024-11-27 22:02:31.926377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:08.934 [2024-11-27 22:02:31.926384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:08.934 [2024-11-27 22:02:31.926390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:08.934 [2024-11-27 22:02:31.926396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:08.934 [2024-11-27 22:02:31.926402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:08.934 [2024-11-27 22:02:31.926408] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:08.934 [2024-11-27 22:02:31.926415] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:08.934 [2024-11-27 22:02:31.926423] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:08.934 [2024-11-27 22:02:31.926429] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:08.934 [2024-11-27 22:02:31.926436] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:08.934 [2024-11-27 22:02:31.926442] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:08.934 [2024-11-27 22:02:31.926449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.934 [2024-11-27 22:02:31.926458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:08.934 [2024-11-27 22:02:31.926466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:30:08.934 [2024-11-27 22:02:31.926474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.934 [2024-11-27 22:02:31.933955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.934 [2024-11-27 22:02:31.933979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:08.934 [2024-11-27 22:02:31.933989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.449 ms 00:30:08.934 [2024-11-27 22:02:31.933995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.934 [2024-11-27 22:02:31.934054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.934 [2024-11-27 22:02:31.934061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:08.934 [2024-11-27 22:02:31.934067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:30:08.934 [2024-11-27 22:02:31.934073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.934 [2024-11-27 22:02:31.950669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.934 [2024-11-27 22:02:31.950714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:08.934 [2024-11-27 22:02:31.950729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.570 ms 00:30:08.934 [2024-11-27 22:02:31.950738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.934 [2024-11-27 22:02:31.950773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.934 [2024-11-27 22:02:31.950785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:08.934 [2024-11-27 22:02:31.950795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:08.934 [2024-11-27 22:02:31.950804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.934 [2024-11-27 22:02:31.951178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.934 [2024-11-27 22:02:31.951211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:08.934 [2024-11-27 22:02:31.951223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:30:08.934 [2024-11-27 22:02:31.951232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.934 [2024-11-27 22:02:31.951404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.934 [2024-11-27 22:02:31.951427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:08.934 [2024-11-27 22:02:31.951439] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:30:08.934 [2024-11-27 22:02:31.951449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.934 [2024-11-27 22:02:31.956657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.934 [2024-11-27 22:02:31.956859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:08.934 [2024-11-27 22:02:31.956878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.184 ms 00:30:08.934 [2024-11-27 22:02:31.956889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.934 [2024-11-27 22:02:31.958822] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:08.934 [2024-11-27 22:02:31.958850] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:08.934 [2024-11-27 22:02:31.958859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.934 [2024-11-27 22:02:31.958867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:08.934 [2024-11-27 22:02:31.958874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.876 ms 00:30:08.934 [2024-11-27 22:02:31.958879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.934 [2024-11-27 22:02:31.969959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.934 [2024-11-27 22:02:31.969989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:08.934 [2024-11-27 22:02:31.969998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.052 ms 00:30:08.934 [2024-11-27 22:02:31.970005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.934 [2024-11-27 22:02:31.971431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.934 [2024-11-27 22:02:31.971523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:08.934 [2024-11-27 22:02:31.971533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.407 ms 00:30:08.934 [2024-11-27 22:02:31.971539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.934 [2024-11-27 22:02:31.972787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.934 [2024-11-27 22:02:31.972811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:08.934 [2024-11-27 22:02:31.972819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.228 ms 00:30:08.934 [2024-11-27 22:02:31.972825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.934 [2024-11-27 22:02:31.973056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.934 [2024-11-27 22:02:31.973071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:08.934 [2024-11-27 22:02:31.973079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:30:08.934 [2024-11-27 22:02:31.973084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.934 [2024-11-27 22:02:31.986205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.934 [2024-11-27 22:02:31.986240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:08.935 [2024-11-27 22:02:31.986248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
13.109 ms 00:30:08.935 [2024-11-27 22:02:31.986254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.935 [2024-11-27 22:02:31.991910] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:08.935 [2024-11-27 22:02:31.993800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.935 [2024-11-27 22:02:31.993824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:08.935 [2024-11-27 22:02:31.993836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.517 ms 00:30:08.935 [2024-11-27 22:02:31.993848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.935 [2024-11-27 22:02:31.993887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.935 [2024-11-27 22:02:31.993895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:08.935 [2024-11-27 22:02:31.993904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:08.935 [2024-11-27 22:02:31.993917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.935 [2024-11-27 22:02:31.993968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.935 [2024-11-27 22:02:31.993976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:08.935 [2024-11-27 22:02:31.993983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:30:08.935 [2024-11-27 22:02:31.993992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.935 [2024-11-27 22:02:31.994008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.935 [2024-11-27 22:02:31.994015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:08.935 [2024-11-27 22:02:31.994022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:08.935 [2024-11-27 22:02:31.994030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.935 [2024-11-27 22:02:31.994056] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:08.935 [2024-11-27 22:02:31.994064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.935 [2024-11-27 22:02:31.994071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:08.935 [2024-11-27 22:02:31.994078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:08.935 [2024-11-27 22:02:31.994086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.935 [2024-11-27 22:02:31.996694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.935 [2024-11-27 22:02:31.996719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:08.935 [2024-11-27 22:02:31.996727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.593 ms 00:30:08.935 [2024-11-27 22:02:31.996732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.935 [2024-11-27 22:02:31.996791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.935 [2024-11-27 22:02:31.996805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:08.935 [2024-11-27 22:02:31.996814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:30:08.935 [2024-11-27 22:02:31.996822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.935 
[2024-11-27 22:02:31.997595] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 80.525 ms, result 0 00:30:10.318  [2024-11-27T22:02:34.009Z] Copying: 25/1024 [MB] (25 MBps) [2024-11-27T22:02:35.394Z] Copying: 44/1024 [MB] (19 MBps) [2024-11-27T22:02:36.332Z] Copying: 68/1024 [MB] (24 MBps) [2024-11-27T22:02:37.276Z] Copying: 107/1024 [MB] (39 MBps) [2024-11-27T22:02:38.216Z] Copying: 136/1024 [MB] (28 MBps) [2024-11-27T22:02:39.163Z] Copying: 153/1024 [MB] (17 MBps) [2024-11-27T22:02:40.099Z] Copying: 165/1024 [MB] (12 MBps) [2024-11-27T22:02:41.030Z] Copying: 185/1024 [MB] (20 MBps) [2024-11-27T22:02:42.400Z] Copying: 213/1024 [MB] (27 MBps) [2024-11-27T22:02:43.342Z] Copying: 240/1024 [MB] (27 MBps) [2024-11-27T22:02:44.286Z] Copying: 266/1024 [MB] (25 MBps) [2024-11-27T22:02:45.226Z] Copying: 283/1024 [MB] (17 MBps) [2024-11-27T22:02:46.163Z] Copying: 298/1024 [MB] (15 MBps) [2024-11-27T22:02:47.107Z] Copying: 332/1024 [MB] (33 MBps) [2024-11-27T22:02:48.051Z] Copying: 348/1024 [MB] (16 MBps) [2024-11-27T22:02:49.434Z] Copying: 366/1024 [MB] (17 MBps) [2024-11-27T22:02:50.367Z] Copying: 385/1024 [MB] (18 MBps) [2024-11-27T22:02:51.302Z] Copying: 411/1024 [MB] (26 MBps) [2024-11-27T22:02:52.235Z] Copying: 444/1024 [MB] (32 MBps) [2024-11-27T22:02:53.178Z] Copying: 471/1024 [MB] (27 MBps) [2024-11-27T22:02:54.122Z] Copying: 494/1024 [MB] (22 MBps) [2024-11-27T22:02:55.066Z] Copying: 512/1024 [MB] (17 MBps) [2024-11-27T22:02:56.010Z] Copying: 529/1024 [MB] (16 MBps) [2024-11-27T22:02:57.392Z] Copying: 546/1024 [MB] (17 MBps) [2024-11-27T22:02:58.325Z] Copying: 562/1024 [MB] (15 MBps) [2024-11-27T22:02:59.255Z] Copying: 586/1024 [MB] (23 MBps) [2024-11-27T22:03:00.194Z] Copying: 611/1024 [MB] (25 MBps) [2024-11-27T22:03:01.129Z] Copying: 629/1024 [MB] (17 MBps) [2024-11-27T22:03:02.062Z] Copying: 652/1024 [MB] (22 MBps) [2024-11-27T22:03:03.443Z] Copying: 681/1024 [MB] (29 MBps) [2024-11-27T22:03:04.015Z] Copying: 721/1024 [MB] (39 MBps) [2024-11-27T22:03:05.399Z] Copying: 738/1024 [MB] (17 MBps) [2024-11-27T22:03:06.345Z] Copying: 764/1024 [MB] (25 MBps) [2024-11-27T22:03:07.290Z] Copying: 788/1024 [MB] (24 MBps) [2024-11-27T22:03:08.236Z] Copying: 801/1024 [MB] (13 MBps) [2024-11-27T22:03:09.181Z] Copying: 813/1024 [MB] (11 MBps) [2024-11-27T22:03:10.123Z] Copying: 835/1024 [MB] (22 MBps) [2024-11-27T22:03:11.070Z] Copying: 852/1024 [MB] (17 MBps) [2024-11-27T22:03:12.016Z] Copying: 863/1024 [MB] (10 MBps) [2024-11-27T22:03:13.407Z] Copying: 873/1024 [MB] (10 MBps) [2024-11-27T22:03:14.354Z] Copying: 884/1024 [MB] (10 MBps) [2024-11-27T22:03:15.300Z] Copying: 895/1024 [MB] (10 MBps) [2024-11-27T22:03:16.246Z] Copying: 907/1024 [MB] (12 MBps) [2024-11-27T22:03:17.189Z] Copying: 920/1024 [MB] (12 MBps) [2024-11-27T22:03:18.132Z] Copying: 937/1024 [MB] (16 MBps) [2024-11-27T22:03:19.142Z] Copying: 958/1024 [MB] (21 MBps) [2024-11-27T22:03:20.158Z] Copying: 984/1024 [MB] (25 MBps) [2024-11-27T22:03:20.732Z] Copying: 1005/1024 [MB] (21 MBps) [2024-11-27T22:03:20.732Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-11-27 22:03:20.712322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.611 [2024-11-27 22:03:20.712441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:57.611 [2024-11-27 22:03:20.712493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:57.611 [2024-11-27 22:03:20.712518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:30:57.611 [2024-11-27 22:03:20.712615] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:57.611 [2024-11-27 22:03:20.713158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.611 [2024-11-27 22:03:20.713248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:57.611 [2024-11-27 22:03:20.713292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.504 ms 00:30:57.611 [2024-11-27 22:03:20.713309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.611 [2024-11-27 22:03:20.714922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.611 [2024-11-27 22:03:20.715014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:57.611 [2024-11-27 22:03:20.715063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.573 ms 00:30:57.611 [2024-11-27 22:03:20.715090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.611 [2024-11-27 22:03:20.715132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.611 [2024-11-27 22:03:20.715181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:57.611 [2024-11-27 22:03:20.715199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:57.611 [2024-11-27 22:03:20.715213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.611 [2024-11-27 22:03:20.715289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.611 [2024-11-27 22:03:20.715309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:57.611 [2024-11-27 22:03:20.715324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:30:57.611 [2024-11-27 22:03:20.715354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.611 [2024-11-27 22:03:20.715475] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:57.611 [2024-11-27 22:03:20.715502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.715526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.715548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.715570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.715594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.715666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.715689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.715711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.715733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.715756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.715778] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.715829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.715853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.715874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.715896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.715919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.715941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.715991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 
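Each line of the band validity dump shows valid blocks out of what appears to be the band's block count (261120), the band's write count, and its state. A minimal shell check of what that count means in bytes, assuming the 4 KiB FTL block size that the layout figures further down also imply:

    echo $(( 261120 * 4096 / 1048576 ))   # 1020 MiB of user data per band
    echo $(( 0x5000 * 4096 / 1048576 ))   # 80 MiB, the size reported below for the 0x5000-block l2p region

Every band reading '0 / 261120 ... state: free' is consistent with the 'total valid LBAs: 0' line in the statistics dump that follows.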
[2024-11-27 22:03:20.716478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:57.611 [2024-11-27 22:03:20.716711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:30:57.612 [2024-11-27 22:03:20.716748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.716985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:57.612 [2024-11-27 22:03:20.717001] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:57.612 [2024-11-27 22:03:20.717007] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97042294-04eb-4fc0-af05-f54f26f34808 00:30:57.612 [2024-11-27 22:03:20.717017] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:57.612 [2024-11-27 22:03:20.717022] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:57.612 [2024-11-27 22:03:20.717029] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:57.612 [2024-11-27 22:03:20.717036] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:57.612 [2024-11-27 22:03:20.717041] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:57.612 [2024-11-27 22:03:20.717049] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:57.612 [2024-11-27 22:03:20.717055] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:57.612 [2024-11-27 22:03:20.717060] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:57.612 [2024-11-27 22:03:20.717065] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:57.612 [2024-11-27 22:03:20.717071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
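The 'WAF: inf' line follows from the two counters printed just above it; reading write amplification as media writes over user writes (an assumption about exactly which counters are divided, but consistent with the printed result):

    WAF = total writes / user writes = 32 / 0  ->  inf

With the user-write counter at zero, all 32 recorded writes are internal ones, so the ratio is undefined and is reported as inf.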
00:30:57.612 [2024-11-27 22:03:20.717076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:57.612 [2024-11-27 22:03:20.717087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.597 ms 00:30:57.612 [2024-11-27 22:03:20.717093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.612 [2024-11-27 22:03:20.718721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.612 [2024-11-27 22:03:20.718813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:57.612 [2024-11-27 22:03:20.718824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.615 ms 00:30:57.612 [2024-11-27 22:03:20.718835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.612 [2024-11-27 22:03:20.718917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:57.612 [2024-11-27 22:03:20.718930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:57.612 [2024-11-27 22:03:20.718937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:30:57.612 [2024-11-27 22:03:20.718945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.612 [2024-11-27 22:03:20.724411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.612 [2024-11-27 22:03:20.724432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:57.612 [2024-11-27 22:03:20.724440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.612 [2024-11-27 22:03:20.724447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.612 [2024-11-27 22:03:20.724492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.612 [2024-11-27 22:03:20.724504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:57.612 [2024-11-27 22:03:20.724511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.612 [2024-11-27 22:03:20.724520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.612 [2024-11-27 22:03:20.724544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.612 [2024-11-27 22:03:20.724551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:57.612 [2024-11-27 22:03:20.724558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.612 [2024-11-27 22:03:20.724564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.612 [2024-11-27 22:03:20.724575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.612 [2024-11-27 22:03:20.724582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:57.612 [2024-11-27 22:03:20.724590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.612 [2024-11-27 22:03:20.724597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.873 [2024-11-27 22:03:20.734858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.873 [2024-11-27 22:03:20.734888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:57.873 [2024-11-27 22:03:20.734897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.873 [2024-11-27 22:03:20.734903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.873 [2024-11-27 
22:03:20.743142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.873 [2024-11-27 22:03:20.743181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:57.873 [2024-11-27 22:03:20.743195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.873 [2024-11-27 22:03:20.743202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.873 [2024-11-27 22:03:20.743243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.873 [2024-11-27 22:03:20.743250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:57.873 [2024-11-27 22:03:20.743257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.873 [2024-11-27 22:03:20.743263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.873 [2024-11-27 22:03:20.743284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.873 [2024-11-27 22:03:20.743291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:57.873 [2024-11-27 22:03:20.743297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.873 [2024-11-27 22:03:20.743306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.873 [2024-11-27 22:03:20.743375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.873 [2024-11-27 22:03:20.743384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:57.873 [2024-11-27 22:03:20.743390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.873 [2024-11-27 22:03:20.743396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.873 [2024-11-27 22:03:20.743416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.873 [2024-11-27 22:03:20.743423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:57.873 [2024-11-27 22:03:20.743456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.873 [2024-11-27 22:03:20.743463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.873 [2024-11-27 22:03:20.743501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.873 [2024-11-27 22:03:20.743509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:57.873 [2024-11-27 22:03:20.743515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.873 [2024-11-27 22:03:20.743521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.874 [2024-11-27 22:03:20.743562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:57.874 [2024-11-27 22:03:20.743571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:57.874 [2024-11-27 22:03:20.743578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:57.874 [2024-11-27 22:03:20.743588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:57.874 [2024-11-27 22:03:20.743695] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 31.344 ms, result 0 00:30:58.446 00:30:58.446 00:30:58.446 22:03:21 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:30:58.446 [2024-11-27 22:03:21.433813] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:30:58.446 [2024-11-27 22:03:21.433931] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94988 ] 00:30:58.707 [2024-11-27 22:03:21.575046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.707 [2024-11-27 22:03:21.597817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.707 [2024-11-27 22:03:21.697994] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:58.707 [2024-11-27 22:03:21.698058] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:58.970 [2024-11-27 22:03:21.852661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.970 [2024-11-27 22:03:21.852812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:58.970 [2024-11-27 22:03:21.852830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:58.970 [2024-11-27 22:03:21.852837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.970 [2024-11-27 22:03:21.852881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.970 [2024-11-27 22:03:21.852889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:58.970 [2024-11-27 22:03:21.852896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:30:58.970 [2024-11-27 22:03:21.852908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.970 [2024-11-27 22:03:21.852929] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:58.970 [2024-11-27 22:03:21.853106] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:58.970 [2024-11-27 22:03:21.853119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.970 [2024-11-27 22:03:21.853126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:58.970 [2024-11-27 22:03:21.853137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:30:58.970 [2024-11-27 22:03:21.853144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.970 [2024-11-27 22:03:21.853327] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:58.970 [2024-11-27 22:03:21.853360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.970 [2024-11-27 22:03:21.853366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:58.970 [2024-11-27 22:03:21.853374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:30:58.970 [2024-11-27 22:03:21.853384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.970 [2024-11-27 22:03:21.853426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.970 [2024-11-27 22:03:21.853435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:58.970 [2024-11-27 22:03:21.853442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:30:58.970 
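The spdk_dd invocation above reads the ftl0 bdev back out into a plain file: --ib names the input bdev, --of the output file, and --json points the tool at the ftl bdev configuration. The 1024 MB total in the copy progress further down is consistent with --count=262144 blocks of 4 KiB each (the block size is not spelled out on the command line, so that is an assumption):

    echo $(( 262144 * 4096 / 1048576 ))   # 1024 MiB

Dividing that size by the wall-clock span of the progress trail that follows (roughly 22:03:22 to 22:04:23, about 61 s) gives about 16-17 MB/s, matching the reported 'average 16 MBps'.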
[2024-11-27 22:03:21.853450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.970 [2024-11-27 22:03:21.853662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.970 [2024-11-27 22:03:21.853673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:58.970 [2024-11-27 22:03:21.853680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:30:58.970 [2024-11-27 22:03:21.853687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.970 [2024-11-27 22:03:21.853750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.970 [2024-11-27 22:03:21.853763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:58.970 [2024-11-27 22:03:21.853771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:30:58.970 [2024-11-27 22:03:21.853777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.970 [2024-11-27 22:03:21.853793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.970 [2024-11-27 22:03:21.853800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:58.970 [2024-11-27 22:03:21.853807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:58.970 [2024-11-27 22:03:21.853816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.970 [2024-11-27 22:03:21.853829] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:58.970 [2024-11-27 22:03:21.855430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.970 [2024-11-27 22:03:21.855449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:58.970 [2024-11-27 22:03:21.855457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.603 ms 00:30:58.970 [2024-11-27 22:03:21.855463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.970 [2024-11-27 22:03:21.855490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.970 [2024-11-27 22:03:21.855499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:58.970 [2024-11-27 22:03:21.855505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:58.970 [2024-11-27 22:03:21.855515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.970 [2024-11-27 22:03:21.855531] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:58.970 [2024-11-27 22:03:21.855549] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:58.970 [2024-11-27 22:03:21.855582] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:58.970 [2024-11-27 22:03:21.855598] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:58.970 [2024-11-27 22:03:21.855680] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:58.970 [2024-11-27 22:03:21.855688] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:58.970 [2024-11-27 22:03:21.855697] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
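A quick cross-check on the layout summary reported just below: the size of the L2P table follows from the entry count and address size, and lands on the same 80 MiB as the l2p region in the NV cache layout dump:

    echo $(( 20971520 * 4 / 1048576 ))   # 80 MiB L2P table (entries x 4-byte addresses)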
00:30:58.970 [2024-11-27 22:03:21.855705] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:58.970 [2024-11-27 22:03:21.855715] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:58.970 [2024-11-27 22:03:21.855725] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:58.970 [2024-11-27 22:03:21.855732] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:58.970 [2024-11-27 22:03:21.855738] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:58.970 [2024-11-27 22:03:21.855744] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:58.970 [2024-11-27 22:03:21.855749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.970 [2024-11-27 22:03:21.855755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:58.970 [2024-11-27 22:03:21.855761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:30:58.970 [2024-11-27 22:03:21.855767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.970 [2024-11-27 22:03:21.855831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.970 [2024-11-27 22:03:21.855838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:58.970 [2024-11-27 22:03:21.855845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:30:58.970 [2024-11-27 22:03:21.855851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.970 [2024-11-27 22:03:21.855932] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:58.970 [2024-11-27 22:03:21.855940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:58.970 [2024-11-27 22:03:21.855950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:58.970 [2024-11-27 22:03:21.855957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:58.970 [2024-11-27 22:03:21.855962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:58.970 [2024-11-27 22:03:21.855968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:58.970 [2024-11-27 22:03:21.855974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:58.970 [2024-11-27 22:03:21.855980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:58.970 [2024-11-27 22:03:21.855986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:58.970 [2024-11-27 22:03:21.855991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:58.970 [2024-11-27 22:03:21.855998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:58.970 [2024-11-27 22:03:21.856003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:58.970 [2024-11-27 22:03:21.856008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:58.970 [2024-11-27 22:03:21.856013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:58.970 [2024-11-27 22:03:21.856018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:58.970 [2024-11-27 22:03:21.856023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:58.970 [2024-11-27 22:03:21.856028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:30:58.970 [2024-11-27 22:03:21.856033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:58.970 [2024-11-27 22:03:21.856040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:58.970 [2024-11-27 22:03:21.856045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:58.970 [2024-11-27 22:03:21.856050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:58.970 [2024-11-27 22:03:21.856056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:58.970 [2024-11-27 22:03:21.856061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:58.970 [2024-11-27 22:03:21.856066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:58.970 [2024-11-27 22:03:21.856071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:58.970 [2024-11-27 22:03:21.856076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:58.970 [2024-11-27 22:03:21.856082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:58.971 [2024-11-27 22:03:21.856088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:58.971 [2024-11-27 22:03:21.856094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:58.971 [2024-11-27 22:03:21.856100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:58.971 [2024-11-27 22:03:21.856105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:58.971 [2024-11-27 22:03:21.856111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:58.971 [2024-11-27 22:03:21.856117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:58.971 [2024-11-27 22:03:21.856123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:58.971 [2024-11-27 22:03:21.856132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:58.971 [2024-11-27 22:03:21.856138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:58.971 [2024-11-27 22:03:21.856144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:58.971 [2024-11-27 22:03:21.856150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:58.971 [2024-11-27 22:03:21.856155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:58.971 [2024-11-27 22:03:21.856161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:58.971 [2024-11-27 22:03:21.856167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:58.971 [2024-11-27 22:03:21.856172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:58.971 [2024-11-27 22:03:21.856182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:58.971 [2024-11-27 22:03:21.856188] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:58.971 [2024-11-27 22:03:21.856194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:58.971 [2024-11-27 22:03:21.856203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:58.971 [2024-11-27 22:03:21.856212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:58.971 [2024-11-27 22:03:21.856219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:58.971 [2024-11-27 22:03:21.856225] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:58.971 [2024-11-27 22:03:21.856231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:58.971 [2024-11-27 22:03:21.856238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:58.971 [2024-11-27 22:03:21.856244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:58.971 [2024-11-27 22:03:21.856250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:58.971 [2024-11-27 22:03:21.856257] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:58.971 [2024-11-27 22:03:21.856265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:58.971 [2024-11-27 22:03:21.856274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:58.971 [2024-11-27 22:03:21.856280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:58.971 [2024-11-27 22:03:21.856287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:58.971 [2024-11-27 22:03:21.856293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:58.971 [2024-11-27 22:03:21.856299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:58.971 [2024-11-27 22:03:21.856306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:58.971 [2024-11-27 22:03:21.856312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:58.971 [2024-11-27 22:03:21.856318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:58.971 [2024-11-27 22:03:21.856324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:58.971 [2024-11-27 22:03:21.856330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:58.971 [2024-11-27 22:03:21.856510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:58.971 [2024-11-27 22:03:21.856554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:58.971 [2024-11-27 22:03:21.856579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:58.971 [2024-11-27 22:03:21.856602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:58.971 [2024-11-27 22:03:21.856624] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:58.971 [2024-11-27 22:03:21.856646] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:58.971 [2024-11-27 22:03:21.856670] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:58.971 [2024-11-27 22:03:21.856845] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:58.971 [2024-11-27 22:03:21.856871] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:58.971 [2024-11-27 22:03:21.856894] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:58.971 [2024-11-27 22:03:21.856917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.971 [2024-11-27 22:03:21.856932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:58.971 [2024-11-27 22:03:21.856948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.039 ms 00:30:58.971 [2024-11-27 22:03:21.856962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.971 [2024-11-27 22:03:21.864581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.971 [2024-11-27 22:03:21.864674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:58.971 [2024-11-27 22:03:21.864716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.556 ms 00:30:58.971 [2024-11-27 22:03:21.864743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.971 [2024-11-27 22:03:21.864815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.971 [2024-11-27 22:03:21.864833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:58.971 [2024-11-27 22:03:21.864853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:30:58.971 [2024-11-27 22:03:21.864868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.971 [2024-11-27 22:03:21.884768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.971 [2024-11-27 22:03:21.884895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:58.971 [2024-11-27 22:03:21.884952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.854 ms 00:30:58.971 [2024-11-27 22:03:21.884982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.971 [2024-11-27 22:03:21.885037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.971 [2024-11-27 22:03:21.885062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:58.971 [2024-11-27 22:03:21.885083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:58.971 [2024-11-27 22:03:21.885107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.971 [2024-11-27 22:03:21.885209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.971 [2024-11-27 22:03:21.885269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:58.971 [2024-11-27 22:03:21.885294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:30:58.971 [2024-11-27 22:03:21.885318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.971 [2024-11-27 22:03:21.885508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:30:58.971 [2024-11-27 22:03:21.885545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:58.971 [2024-11-27 22:03:21.885561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:30:58.971 [2024-11-27 22:03:21.885571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.971 [2024-11-27 22:03:21.892578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.971 [2024-11-27 22:03:21.892686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:58.971 [2024-11-27 22:03:21.892771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.984 ms 00:30:58.971 [2024-11-27 22:03:21.892801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.971 [2024-11-27 22:03:21.892936] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:58.971 [2024-11-27 22:03:21.892994] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:58.971 [2024-11-27 22:03:21.893167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.971 [2024-11-27 22:03:21.893195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:58.971 [2024-11-27 22:03:21.893221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:30:58.971 [2024-11-27 22:03:21.893250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.971 [2024-11-27 22:03:21.904125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.971 [2024-11-27 22:03:21.904202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:58.971 [2024-11-27 22:03:21.904241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.834 ms 00:30:58.971 [2024-11-27 22:03:21.904261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.971 [2024-11-27 22:03:21.904374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.971 [2024-11-27 22:03:21.904394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:58.971 [2024-11-27 22:03:21.904410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:30:58.971 [2024-11-27 22:03:21.904427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.971 [2024-11-27 22:03:21.904469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.971 [2024-11-27 22:03:21.904490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:58.971 [2024-11-27 22:03:21.904505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:30:58.971 [2024-11-27 22:03:21.904547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.971 [2024-11-27 22:03:21.904813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.971 [2024-11-27 22:03:21.904839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:58.971 [2024-11-27 22:03:21.904884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:30:58.972 [2024-11-27 22:03:21.904904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.972 [2024-11-27 22:03:21.904927] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 
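In the superblock layout dumped a few records above, blk_offs and blk_sz line up with the MiB figures in the region dump if they are read as counts of 4 KiB blocks: the type:0x9 base-device region is the 102400.00 MiB data_btm region, and the final free region ends exactly at the reported 103424.00 MiB base device capacity. A minimal check under that assumption:

    echo $(( 0x1900000 * 4096 / 1048576 ))               # 102400 MiB data region (type 0x9)
    echo $(( (0x19003a0 + 0x3fc60) * 4096 / 1048576 ))   # 103424 MiB, the base device capacity

The same conversion applied to the nvc regions (for example type:0x2 at 0x5000 blocks) reproduces the NV cache layout sizes listed earlier.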
00:30:58.972 [2024-11-27 22:03:21.905052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.972 [2024-11-27 22:03:21.905077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:58.972 [2024-11-27 22:03:21.905092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:30:58.972 [2024-11-27 22:03:21.905129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.972 [2024-11-27 22:03:21.912148] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:58.972 [2024-11-27 22:03:21.912322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.972 [2024-11-27 22:03:21.912356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:58.972 [2024-11-27 22:03:21.912422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.164 ms 00:30:58.972 [2024-11-27 22:03:21.912446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.972 [2024-11-27 22:03:21.914222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.972 [2024-11-27 22:03:21.914292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:58.972 [2024-11-27 22:03:21.914330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.746 ms 00:30:58.972 [2024-11-27 22:03:21.914359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.972 [2024-11-27 22:03:21.914432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.972 [2024-11-27 22:03:21.914456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:58.972 [2024-11-27 22:03:21.914472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:30:58.972 [2024-11-27 22:03:21.914491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.972 [2024-11-27 22:03:21.914531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.972 [2024-11-27 22:03:21.914552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:58.972 [2024-11-27 22:03:21.914568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:58.972 [2024-11-27 22:03:21.914612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.972 [2024-11-27 22:03:21.914663] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:58.972 [2024-11-27 22:03:21.914684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.972 [2024-11-27 22:03:21.914698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:58.972 [2024-11-27 22:03:21.914714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:30:58.972 [2024-11-27 22:03:21.914729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.972 [2024-11-27 22:03:21.919054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.972 [2024-11-27 22:03:21.919140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:58.972 [2024-11-27 22:03:21.919179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.297 ms 00:30:58.972 [2024-11-27 22:03:21.919196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.972 [2024-11-27 22:03:21.919260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:58.972 [2024-11-27 
22:03:21.919280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:58.972 [2024-11-27 22:03:21.919297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:30:58.972 [2024-11-27 22:03:21.919312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:58.972 [2024-11-27 22:03:21.920364] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 67.364 ms, result 0 00:31:00.362  [2024-11-27T22:03:24.427Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-27T22:03:25.378Z] Copying: 43/1024 [MB] (20 MBps) [2024-11-27T22:03:26.334Z] Copying: 61/1024 [MB] (18 MBps) [2024-11-27T22:03:27.280Z] Copying: 82/1024 [MB] (20 MBps) [2024-11-27T22:03:28.225Z] Copying: 97/1024 [MB] (15 MBps) [2024-11-27T22:03:29.170Z] Copying: 115/1024 [MB] (17 MBps) [2024-11-27T22:03:30.115Z] Copying: 125/1024 [MB] (10 MBps) [2024-11-27T22:03:31.063Z] Copying: 136/1024 [MB] (10 MBps) [2024-11-27T22:03:32.448Z] Copying: 146/1024 [MB] (10 MBps) [2024-11-27T22:03:33.392Z] Copying: 159/1024 [MB] (12 MBps) [2024-11-27T22:03:34.336Z] Copying: 180/1024 [MB] (21 MBps) [2024-11-27T22:03:35.279Z] Copying: 203/1024 [MB] (22 MBps) [2024-11-27T22:03:36.224Z] Copying: 220/1024 [MB] (17 MBps) [2024-11-27T22:03:37.166Z] Copying: 243/1024 [MB] (22 MBps) [2024-11-27T22:03:38.111Z] Copying: 264/1024 [MB] (20 MBps) [2024-11-27T22:03:39.497Z] Copying: 283/1024 [MB] (19 MBps) [2024-11-27T22:03:40.069Z] Copying: 297/1024 [MB] (13 MBps) [2024-11-27T22:03:41.457Z] Copying: 314/1024 [MB] (17 MBps) [2024-11-27T22:03:42.406Z] Copying: 334/1024 [MB] (19 MBps) [2024-11-27T22:03:43.381Z] Copying: 354/1024 [MB] (20 MBps) [2024-11-27T22:03:44.328Z] Copying: 379/1024 [MB] (24 MBps) [2024-11-27T22:03:45.270Z] Copying: 394/1024 [MB] (15 MBps) [2024-11-27T22:03:46.215Z] Copying: 419/1024 [MB] (25 MBps) [2024-11-27T22:03:47.160Z] Copying: 439/1024 [MB] (19 MBps) [2024-11-27T22:03:48.106Z] Copying: 450/1024 [MB] (10 MBps) [2024-11-27T22:03:49.495Z] Copying: 461/1024 [MB] (11 MBps) [2024-11-27T22:03:50.066Z] Copying: 472/1024 [MB] (10 MBps) [2024-11-27T22:03:51.453Z] Copying: 483/1024 [MB] (10 MBps) [2024-11-27T22:03:52.400Z] Copying: 498/1024 [MB] (14 MBps) [2024-11-27T22:03:53.344Z] Copying: 515/1024 [MB] (16 MBps) [2024-11-27T22:03:54.290Z] Copying: 528/1024 [MB] (13 MBps) [2024-11-27T22:03:55.233Z] Copying: 544/1024 [MB] (15 MBps) [2024-11-27T22:03:56.176Z] Copying: 566/1024 [MB] (22 MBps) [2024-11-27T22:03:57.122Z] Copying: 581/1024 [MB] (14 MBps) [2024-11-27T22:03:58.065Z] Copying: 597/1024 [MB] (16 MBps) [2024-11-27T22:03:59.453Z] Copying: 615/1024 [MB] (17 MBps) [2024-11-27T22:04:00.397Z] Copying: 637/1024 [MB] (22 MBps) [2024-11-27T22:04:01.343Z] Copying: 654/1024 [MB] (17 MBps) [2024-11-27T22:04:02.283Z] Copying: 677/1024 [MB] (22 MBps) [2024-11-27T22:04:03.228Z] Copying: 700/1024 [MB] (23 MBps) [2024-11-27T22:04:04.172Z] Copying: 718/1024 [MB] (17 MBps) [2024-11-27T22:04:05.115Z] Copying: 737/1024 [MB] (19 MBps) [2024-11-27T22:04:06.498Z] Copying: 754/1024 [MB] (16 MBps) [2024-11-27T22:04:07.072Z] Copying: 771/1024 [MB] (17 MBps) [2024-11-27T22:04:08.460Z] Copying: 786/1024 [MB] (14 MBps) [2024-11-27T22:04:09.413Z] Copying: 796/1024 [MB] (10 MBps) [2024-11-27T22:04:10.362Z] Copying: 807/1024 [MB] (10 MBps) [2024-11-27T22:04:11.306Z] Copying: 818/1024 [MB] (10 MBps) [2024-11-27T22:04:12.263Z] Copying: 829/1024 [MB] (10 MBps) [2024-11-27T22:04:13.208Z] Copying: 842/1024 [MB] (13 MBps) [2024-11-27T22:04:14.150Z] Copying: 
856/1024 [MB] (13 MBps) [2024-11-27T22:04:15.090Z] Copying: 878/1024 [MB] (21 MBps) [2024-11-27T22:04:16.478Z] Copying: 898/1024 [MB] (20 MBps) [2024-11-27T22:04:17.424Z] Copying: 916/1024 [MB] (18 MBps) [2024-11-27T22:04:18.423Z] Copying: 937/1024 [MB] (20 MBps) [2024-11-27T22:04:19.368Z] Copying: 950/1024 [MB] (12 MBps) [2024-11-27T22:04:20.312Z] Copying: 962/1024 [MB] (12 MBps) [2024-11-27T22:04:21.258Z] Copying: 977/1024 [MB] (15 MBps) [2024-11-27T22:04:22.203Z] Copying: 995/1024 [MB] (18 MBps) [2024-11-27T22:04:22.777Z] Copying: 1014/1024 [MB] (19 MBps) [2024-11-27T22:04:23.351Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-27 22:04:23.153954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.230 [2024-11-27 22:04:23.154051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:00.230 [2024-11-27 22:04:23.154070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:00.230 [2024-11-27 22:04:23.154081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.230 [2024-11-27 22:04:23.154121] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:00.230 [2024-11-27 22:04:23.154963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.230 [2024-11-27 22:04:23.155000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:00.230 [2024-11-27 22:04:23.155013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.825 ms 00:32:00.230 [2024-11-27 22:04:23.155024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.230 [2024-11-27 22:04:23.155271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.230 [2024-11-27 22:04:23.155282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:00.230 [2024-11-27 22:04:23.155291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:32:00.230 [2024-11-27 22:04:23.155301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.230 [2024-11-27 22:04:23.155364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.230 [2024-11-27 22:04:23.155375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:00.230 [2024-11-27 22:04:23.155384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:00.230 [2024-11-27 22:04:23.155392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.230 [2024-11-27 22:04:23.155464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.230 [2024-11-27 22:04:23.155479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:00.230 [2024-11-27 22:04:23.155489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:32:00.230 [2024-11-27 22:04:23.155497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.230 [2024-11-27 22:04:23.155512] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:00.230 [2024-11-27 22:04:23.155526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 
261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:00.230 [2024-11-27 22:04:23.155874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.155881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.155888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.155896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.155904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.155911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.155919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.155926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.155934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.155942] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.155952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.155959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.155967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.155974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.155981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.155989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.155997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 
22:04:23.156138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:00.231 [2024-11-27 22:04:23.156331] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:00.231 [2024-11-27 22:04:23.156357] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97042294-04eb-4fc0-af05-f54f26f34808 00:32:00.231 [2024-11-27 
22:04:23.156366] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:00.231 [2024-11-27 22:04:23.156375] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:32:00.231 [2024-11-27 22:04:23.156383] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:00.231 [2024-11-27 22:04:23.156393] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:00.231 [2024-11-27 22:04:23.156410] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:00.231 [2024-11-27 22:04:23.156423] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:00.231 [2024-11-27 22:04:23.156431] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:00.231 [2024-11-27 22:04:23.156437] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:00.231 [2024-11-27 22:04:23.156445] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:00.231 [2024-11-27 22:04:23.156453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.231 [2024-11-27 22:04:23.156472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:00.231 [2024-11-27 22:04:23.156481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.942 ms 00:32:00.231 [2024-11-27 22:04:23.156490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.231 [2024-11-27 22:04:23.159161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.231 [2024-11-27 22:04:23.159199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:00.231 [2024-11-27 22:04:23.159212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.655 ms 00:32:00.231 [2024-11-27 22:04:23.159221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.231 [2024-11-27 22:04:23.159372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.231 [2024-11-27 22:04:23.159382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:00.231 [2024-11-27 22:04:23.159400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:32:00.231 [2024-11-27 22:04:23.159409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.231 [2024-11-27 22:04:23.168343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:00.231 [2024-11-27 22:04:23.168391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:00.231 [2024-11-27 22:04:23.168403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:00.231 [2024-11-27 22:04:23.168413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.231 [2024-11-27 22:04:23.168494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:00.231 [2024-11-27 22:04:23.168504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:00.231 [2024-11-27 22:04:23.168516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:00.231 [2024-11-27 22:04:23.168529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.231 [2024-11-27 22:04:23.168601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:00.231 [2024-11-27 22:04:23.168613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:00.231 [2024-11-27 22:04:23.168623] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:00.231 [2024-11-27 22:04:23.168632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.231 [2024-11-27 22:04:23.168651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:00.231 [2024-11-27 22:04:23.168661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:00.231 [2024-11-27 22:04:23.168670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:00.231 [2024-11-27 22:04:23.168713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.231 [2024-11-27 22:04:23.182254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:00.232 [2024-11-27 22:04:23.182501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:00.232 [2024-11-27 22:04:23.182522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:00.232 [2024-11-27 22:04:23.182531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.232 [2024-11-27 22:04:23.195427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:00.232 [2024-11-27 22:04:23.195476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:00.232 [2024-11-27 22:04:23.195495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:00.232 [2024-11-27 22:04:23.195503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.232 [2024-11-27 22:04:23.195559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:00.232 [2024-11-27 22:04:23.195569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:00.232 [2024-11-27 22:04:23.195578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:00.232 [2024-11-27 22:04:23.195586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.232 [2024-11-27 22:04:23.195626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:00.232 [2024-11-27 22:04:23.195636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:00.232 [2024-11-27 22:04:23.195644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:00.232 [2024-11-27 22:04:23.195653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.232 [2024-11-27 22:04:23.195714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:00.232 [2024-11-27 22:04:23.195724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:00.232 [2024-11-27 22:04:23.195732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:00.232 [2024-11-27 22:04:23.195740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.232 [2024-11-27 22:04:23.195769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:00.232 [2024-11-27 22:04:23.195778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:00.232 [2024-11-27 22:04:23.195786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:00.232 [2024-11-27 22:04:23.195794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.232 [2024-11-27 22:04:23.195837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:00.232 [2024-11-27 22:04:23.195847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 
00:32:00.232 [2024-11-27 22:04:23.195855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:00.232 [2024-11-27 22:04:23.195863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.232 [2024-11-27 22:04:23.195906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:00.232 [2024-11-27 22:04:23.195916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:00.232 [2024-11-27 22:04:23.195926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:00.232 [2024-11-27 22:04:23.195934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.232 [2024-11-27 22:04:23.196071] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 42.083 ms, result 0 00:32:00.494 00:32:00.494 00:32:00.494 22:04:23 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:03.043 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:03.043 22:04:25 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:32:03.043 [2024-11-27 22:04:25.718158] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:32:03.043 [2024-11-27 22:04:25.718523] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95630 ] 00:32:03.043 [2024-11-27 22:04:25.863761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:03.043 [2024-11-27 22:04:25.895559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:03.043 [2024-11-27 22:04:26.020802] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:03.043 [2024-11-27 22:04:26.021069] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:03.307 [2024-11-27 22:04:26.182271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.307 [2024-11-27 22:04:26.182514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:03.307 [2024-11-27 22:04:26.182547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:03.307 [2024-11-27 22:04:26.182556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.307 [2024-11-27 22:04:26.182633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.307 [2024-11-27 22:04:26.182650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:03.307 [2024-11-27 22:04:26.182660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:32:03.307 [2024-11-27 22:04:26.182674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.307 [2024-11-27 22:04:26.182707] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:03.307 [2024-11-27 22:04:26.182968] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:03.307 [2024-11-27 22:04:26.182990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.307 [2024-11-27 
22:04:26.183000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:03.307 [2024-11-27 22:04:26.183011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:32:03.307 [2024-11-27 22:04:26.183020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.307 [2024-11-27 22:04:26.183321] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:03.307 [2024-11-27 22:04:26.183368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.307 [2024-11-27 22:04:26.183379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:03.307 [2024-11-27 22:04:26.183389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:32:03.307 [2024-11-27 22:04:26.183402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.307 [2024-11-27 22:04:26.183459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.307 [2024-11-27 22:04:26.183470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:03.307 [2024-11-27 22:04:26.183479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:32:03.307 [2024-11-27 22:04:26.183491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.307 [2024-11-27 22:04:26.183752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.307 [2024-11-27 22:04:26.183765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:03.307 [2024-11-27 22:04:26.183774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:32:03.307 [2024-11-27 22:04:26.183781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.307 [2024-11-27 22:04:26.183866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.307 [2024-11-27 22:04:26.183877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:03.307 [2024-11-27 22:04:26.183887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:32:03.307 [2024-11-27 22:04:26.183895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.307 [2024-11-27 22:04:26.183921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.307 [2024-11-27 22:04:26.183932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:03.307 [2024-11-27 22:04:26.183941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:03.307 [2024-11-27 22:04:26.183949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.307 [2024-11-27 22:04:26.183974] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:03.307 [2024-11-27 22:04:26.186119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.307 [2024-11-27 22:04:26.186293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:03.307 [2024-11-27 22:04:26.186310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.147 ms 00:32:03.307 [2024-11-27 22:04:26.186318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.307 [2024-11-27 22:04:26.186375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.307 [2024-11-27 22:04:26.186385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:03.307 [2024-11-27 
22:04:26.186394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:03.307 [2024-11-27 22:04:26.186402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.307 [2024-11-27 22:04:26.186458] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:03.307 [2024-11-27 22:04:26.186485] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:03.307 [2024-11-27 22:04:26.186527] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:03.307 [2024-11-27 22:04:26.186546] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:03.307 [2024-11-27 22:04:26.186663] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:03.307 [2024-11-27 22:04:26.186675] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:03.307 [2024-11-27 22:04:26.186689] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:03.307 [2024-11-27 22:04:26.186703] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:03.307 [2024-11-27 22:04:26.186720] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:03.307 [2024-11-27 22:04:26.186729] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:03.307 [2024-11-27 22:04:26.186737] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:03.307 [2024-11-27 22:04:26.186744] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:03.307 [2024-11-27 22:04:26.186752] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:03.307 [2024-11-27 22:04:26.186763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.307 [2024-11-27 22:04:26.186779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:03.307 [2024-11-27 22:04:26.186787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:32:03.307 [2024-11-27 22:04:26.186795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.307 [2024-11-27 22:04:26.186877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.307 [2024-11-27 22:04:26.186886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:03.307 [2024-11-27 22:04:26.186896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:03.307 [2024-11-27 22:04:26.186907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.307 [2024-11-27 22:04:26.187009] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:03.307 [2024-11-27 22:04:26.187020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:03.307 [2024-11-27 22:04:26.187031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:03.307 [2024-11-27 22:04:26.187039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:03.307 [2024-11-27 22:04:26.187047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:03.307 [2024-11-27 22:04:26.187054] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 0.12 MiB 00:32:03.307 [2024-11-27 22:04:26.187061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:03.307 [2024-11-27 22:04:26.187068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:03.307 [2024-11-27 22:04:26.187076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:03.307 [2024-11-27 22:04:26.187083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:03.307 [2024-11-27 22:04:26.187090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:03.307 [2024-11-27 22:04:26.187097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:03.307 [2024-11-27 22:04:26.187106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:03.307 [2024-11-27 22:04:26.187113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:03.307 [2024-11-27 22:04:26.187121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:03.307 [2024-11-27 22:04:26.187128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:03.307 [2024-11-27 22:04:26.187135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:03.307 [2024-11-27 22:04:26.187143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:03.307 [2024-11-27 22:04:26.187151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:03.307 [2024-11-27 22:04:26.187159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:03.307 [2024-11-27 22:04:26.187165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:03.307 [2024-11-27 22:04:26.187172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:03.307 [2024-11-27 22:04:26.187178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:03.307 [2024-11-27 22:04:26.187185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:03.307 [2024-11-27 22:04:26.187192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:03.307 [2024-11-27 22:04:26.187199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:03.307 [2024-11-27 22:04:26.187206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:03.307 [2024-11-27 22:04:26.187212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:03.307 [2024-11-27 22:04:26.187220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:03.308 [2024-11-27 22:04:26.187227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:03.308 [2024-11-27 22:04:26.187235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:03.308 [2024-11-27 22:04:26.187242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:03.308 [2024-11-27 22:04:26.187250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:03.308 [2024-11-27 22:04:26.187258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:03.308 [2024-11-27 22:04:26.187270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:03.308 [2024-11-27 22:04:26.187278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:03.308 [2024-11-27 22:04:26.187286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:03.308 [2024-11-27 22:04:26.187293] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:03.308 [2024-11-27 22:04:26.187301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:03.308 [2024-11-27 22:04:26.187308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:03.308 [2024-11-27 22:04:26.187316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:03.308 [2024-11-27 22:04:26.187324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:03.308 [2024-11-27 22:04:26.187332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:03.308 [2024-11-27 22:04:26.187356] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:03.308 [2024-11-27 22:04:26.187367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:03.308 [2024-11-27 22:04:26.187376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:03.308 [2024-11-27 22:04:26.187387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:03.308 [2024-11-27 22:04:26.187396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:03.308 [2024-11-27 22:04:26.187404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:03.308 [2024-11-27 22:04:26.187413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:03.308 [2024-11-27 22:04:26.187423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:03.308 [2024-11-27 22:04:26.187431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:03.308 [2024-11-27 22:04:26.187439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:03.308 [2024-11-27 22:04:26.187449] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:03.308 [2024-11-27 22:04:26.187463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:03.308 [2024-11-27 22:04:26.187473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:03.308 [2024-11-27 22:04:26.187482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:03.308 [2024-11-27 22:04:26.187491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:03.308 [2024-11-27 22:04:26.187499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:03.308 [2024-11-27 22:04:26.187520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:03.308 [2024-11-27 22:04:26.187528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:03.308 [2024-11-27 22:04:26.187536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:03.308 [2024-11-27 22:04:26.187544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:03.308 [2024-11-27 22:04:26.187552] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:03.308 [2024-11-27 22:04:26.187561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:03.308 [2024-11-27 22:04:26.187569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:03.308 [2024-11-27 22:04:26.187585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:03.308 [2024-11-27 22:04:26.187593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:03.308 [2024-11-27 22:04:26.187602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:03.308 [2024-11-27 22:04:26.187610] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:03.308 [2024-11-27 22:04:26.187619] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:03.308 [2024-11-27 22:04:26.187630] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:03.308 [2024-11-27 22:04:26.187639] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:03.308 [2024-11-27 22:04:26.187648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:03.308 [2024-11-27 22:04:26.187656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:03.308 [2024-11-27 22:04:26.187665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.308 [2024-11-27 22:04:26.187675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:03.308 [2024-11-27 22:04:26.187685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.725 ms 00:32:03.308 [2024-11-27 22:04:26.187693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.308 [2024-11-27 22:04:26.197542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.308 [2024-11-27 22:04:26.197586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:03.308 [2024-11-27 22:04:26.197596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.805 ms 00:32:03.308 [2024-11-27 22:04:26.197604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.308 [2024-11-27 22:04:26.197689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.308 [2024-11-27 22:04:26.197698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:03.308 [2024-11-27 22:04:26.197707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:32:03.308 [2024-11-27 22:04:26.197715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.308 [2024-11-27 22:04:26.220167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.308 [2024-11-27 22:04:26.220439] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:03.308 [2024-11-27 22:04:26.220468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.393 ms 00:32:03.308 [2024-11-27 22:04:26.220491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.308 [2024-11-27 22:04:26.220554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.308 [2024-11-27 22:04:26.220570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:03.308 [2024-11-27 22:04:26.220585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:03.308 [2024-11-27 22:04:26.220598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.308 [2024-11-27 22:04:26.220770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.308 [2024-11-27 22:04:26.220800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:03.308 [2024-11-27 22:04:26.220814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:32:03.308 [2024-11-27 22:04:26.220826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.308 [2024-11-27 22:04:26.221013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.308 [2024-11-27 22:04:26.221044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:03.308 [2024-11-27 22:04:26.221059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:32:03.308 [2024-11-27 22:04:26.221073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.308 [2024-11-27 22:04:26.229261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.308 [2024-11-27 22:04:26.229308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:03.308 [2024-11-27 22:04:26.229326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.159 ms 00:32:03.308 [2024-11-27 22:04:26.229349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.308 [2024-11-27 22:04:26.229470] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:03.308 [2024-11-27 22:04:26.229487] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:03.308 [2024-11-27 22:04:26.229498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.308 [2024-11-27 22:04:26.229507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:03.308 [2024-11-27 22:04:26.229518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:32:03.308 [2024-11-27 22:04:26.229528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.308 [2024-11-27 22:04:26.241817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.308 [2024-11-27 22:04:26.241863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:03.308 [2024-11-27 22:04:26.241880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.270 ms 00:32:03.308 [2024-11-27 22:04:26.241888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.308 [2024-11-27 22:04:26.242021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.308 [2024-11-27 22:04:26.242032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 
00:32:03.308 [2024-11-27 22:04:26.242040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:32:03.308 [2024-11-27 22:04:26.242053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.308 [2024-11-27 22:04:26.242101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.308 [2024-11-27 22:04:26.242115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:03.308 [2024-11-27 22:04:26.242124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:03.308 [2024-11-27 22:04:26.242131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.308 [2024-11-27 22:04:26.242480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.308 [2024-11-27 22:04:26.242500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:03.308 [2024-11-27 22:04:26.242514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:32:03.308 [2024-11-27 22:04:26.242528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.309 [2024-11-27 22:04:26.242546] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:03.309 [2024-11-27 22:04:26.242557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.309 [2024-11-27 22:04:26.242568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:03.309 [2024-11-27 22:04:26.242580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:03.309 [2024-11-27 22:04:26.242588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.309 [2024-11-27 22:04:26.251836] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:03.309 [2024-11-27 22:04:26.252125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.309 [2024-11-27 22:04:26.252140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:03.309 [2024-11-27 22:04:26.252150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.518 ms 00:32:03.309 [2024-11-27 22:04:26.252157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.309 [2024-11-27 22:04:26.254578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.309 [2024-11-27 22:04:26.254610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:03.309 [2024-11-27 22:04:26.254625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.392 ms 00:32:03.309 [2024-11-27 22:04:26.254634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.309 [2024-11-27 22:04:26.254729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.309 [2024-11-27 22:04:26.254740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:03.309 [2024-11-27 22:04:26.254754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:32:03.309 [2024-11-27 22:04:26.254765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.309 [2024-11-27 22:04:26.254795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.309 [2024-11-27 22:04:26.254804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:03.309 [2024-11-27 22:04:26.254811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.005 ms 00:32:03.309 [2024-11-27 22:04:26.254819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.309 [2024-11-27 22:04:26.254852] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:03.309 [2024-11-27 22:04:26.254862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.309 [2024-11-27 22:04:26.254872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:03.309 [2024-11-27 22:04:26.254880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:32:03.309 [2024-11-27 22:04:26.254887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.309 [2024-11-27 22:04:26.261237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.309 [2024-11-27 22:04:26.261447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:03.309 [2024-11-27 22:04:26.261521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.323 ms 00:32:03.309 [2024-11-27 22:04:26.261546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.309 [2024-11-27 22:04:26.261636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.309 [2024-11-27 22:04:26.261662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:03.309 [2024-11-27 22:04:26.261682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:32:03.309 [2024-11-27 22:04:26.261709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.309 [2024-11-27 22:04:26.262996] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 80.282 ms, result 0 00:32:04.256  [2024-11-27T22:04:28.320Z] Copying: 15/1024 [MB] (15 MBps) [2024-11-27T22:04:29.705Z] Copying: 31/1024 [MB] (16 MBps) [2024-11-27T22:04:30.644Z] Copying: 48/1024 [MB] (17 MBps) [2024-11-27T22:04:31.589Z] Copying: 74/1024 [MB] (25 MBps) [2024-11-27T22:04:32.531Z] Copying: 85/1024 [MB] (11 MBps) [2024-11-27T22:04:33.473Z] Copying: 98/1024 [MB] (12 MBps) [2024-11-27T22:04:34.414Z] Copying: 115/1024 [MB] (17 MBps) [2024-11-27T22:04:35.356Z] Copying: 129/1024 [MB] (13 MBps) [2024-11-27T22:04:36.298Z] Copying: 145/1024 [MB] (15 MBps) [2024-11-27T22:04:37.687Z] Copying: 162/1024 [MB] (17 MBps) [2024-11-27T22:04:38.632Z] Copying: 176/1024 [MB] (13 MBps) [2024-11-27T22:04:39.578Z] Copying: 189/1024 [MB] (13 MBps) [2024-11-27T22:04:40.523Z] Copying: 202/1024 [MB] (12 MBps) [2024-11-27T22:04:41.468Z] Copying: 219/1024 [MB] (17 MBps) [2024-11-27T22:04:42.410Z] Copying: 236/1024 [MB] (16 MBps) [2024-11-27T22:04:43.354Z] Copying: 250/1024 [MB] (13 MBps) [2024-11-27T22:04:44.296Z] Copying: 264/1024 [MB] (14 MBps) [2024-11-27T22:04:45.684Z] Copying: 275/1024 [MB] (10 MBps) [2024-11-27T22:04:46.629Z] Copying: 286/1024 [MB] (11 MBps) [2024-11-27T22:04:47.574Z] Copying: 296/1024 [MB] (10 MBps) [2024-11-27T22:04:48.639Z] Copying: 306/1024 [MB] (10 MBps) [2024-11-27T22:04:49.582Z] Copying: 317/1024 [MB] (10 MBps) [2024-11-27T22:04:50.522Z] Copying: 328/1024 [MB] (10 MBps) [2024-11-27T22:04:51.464Z] Copying: 365/1024 [MB] (37 MBps) [2024-11-27T22:04:52.407Z] Copying: 376/1024 [MB] (10 MBps) [2024-11-27T22:04:53.352Z] Copying: 389/1024 [MB] (13 MBps) [2024-11-27T22:04:54.295Z] Copying: 402/1024 [MB] (12 MBps) [2024-11-27T22:04:55.679Z] Copying: 414/1024 [MB] (11 MBps) [2024-11-27T22:04:56.625Z] Copying: 428/1024 [MB] (14 MBps) 
[2024-11-27T22:04:57.572Z] Copying: 443/1024 [MB] (15 MBps) [2024-11-27T22:04:58.518Z] Copying: 457/1024 [MB] (14 MBps) [2024-11-27T22:04:59.461Z] Copying: 473/1024 [MB] (15 MBps) [2024-11-27T22:05:00.405Z] Copying: 489/1024 [MB] (15 MBps) [2024-11-27T22:05:01.351Z] Copying: 504/1024 [MB] (15 MBps) [2024-11-27T22:05:02.296Z] Copying: 517/1024 [MB] (12 MBps) [2024-11-27T22:05:03.684Z] Copying: 529/1024 [MB] (12 MBps) [2024-11-27T22:05:04.629Z] Copying: 542/1024 [MB] (13 MBps) [2024-11-27T22:05:05.575Z] Copying: 556/1024 [MB] (14 MBps) [2024-11-27T22:05:06.521Z] Copying: 570/1024 [MB] (13 MBps) [2024-11-27T22:05:07.467Z] Copying: 584/1024 [MB] (13 MBps) [2024-11-27T22:05:08.411Z] Copying: 596/1024 [MB] (11 MBps) [2024-11-27T22:05:09.371Z] Copying: 612/1024 [MB] (16 MBps) [2024-11-27T22:05:10.322Z] Copying: 631/1024 [MB] (18 MBps) [2024-11-27T22:05:11.714Z] Copying: 648/1024 [MB] (17 MBps) [2024-11-27T22:05:12.289Z] Copying: 659/1024 [MB] (10 MBps) [2024-11-27T22:05:13.680Z] Copying: 669/1024 [MB] (10 MBps) [2024-11-27T22:05:14.628Z] Copying: 680/1024 [MB] (10 MBps) [2024-11-27T22:05:15.574Z] Copying: 691/1024 [MB] (10 MBps) [2024-11-27T22:05:16.521Z] Copying: 702/1024 [MB] (10 MBps) [2024-11-27T22:05:17.466Z] Copying: 712/1024 [MB] (10 MBps) [2024-11-27T22:05:18.410Z] Copying: 722/1024 [MB] (10 MBps) [2024-11-27T22:05:19.353Z] Copying: 732/1024 [MB] (10 MBps) [2024-11-27T22:05:20.298Z] Copying: 743/1024 [MB] (11 MBps) [2024-11-27T22:05:21.361Z] Copying: 754/1024 [MB] (10 MBps) [2024-11-27T22:05:22.301Z] Copying: 765/1024 [MB] (11 MBps) [2024-11-27T22:05:23.692Z] Copying: 784/1024 [MB] (18 MBps) [2024-11-27T22:05:24.639Z] Copying: 795/1024 [MB] (11 MBps) [2024-11-27T22:05:25.584Z] Copying: 809/1024 [MB] (13 MBps) [2024-11-27T22:05:26.528Z] Copying: 820/1024 [MB] (10 MBps) [2024-11-27T22:05:27.473Z] Copying: 830/1024 [MB] (10 MBps) [2024-11-27T22:05:28.417Z] Copying: 842/1024 [MB] (12 MBps) [2024-11-27T22:05:29.361Z] Copying: 858/1024 [MB] (15 MBps) [2024-11-27T22:05:30.307Z] Copying: 869/1024 [MB] (11 MBps) [2024-11-27T22:05:31.698Z] Copying: 881/1024 [MB] (11 MBps) [2024-11-27T22:05:32.644Z] Copying: 892/1024 [MB] (11 MBps) [2024-11-27T22:05:33.590Z] Copying: 903/1024 [MB] (10 MBps) [2024-11-27T22:05:34.535Z] Copying: 913/1024 [MB] (10 MBps) [2024-11-27T22:05:35.479Z] Copying: 930/1024 [MB] (16 MBps) [2024-11-27T22:05:36.422Z] Copying: 943/1024 [MB] (13 MBps) [2024-11-27T22:05:37.364Z] Copying: 957/1024 [MB] (14 MBps) [2024-11-27T22:05:38.307Z] Copying: 972/1024 [MB] (14 MBps) [2024-11-27T22:05:39.694Z] Copying: 988/1024 [MB] (16 MBps) [2024-11-27T22:05:40.640Z] Copying: 1004/1024 [MB] (16 MBps) [2024-11-27T22:05:41.586Z] Copying: 1020/1024 [MB] (15 MBps) [2024-11-27T22:05:41.586Z] Copying: 1048400/1048576 [kB] (3332 kBps) [2024-11-27T22:05:41.586Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-27 22:05:41.497513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:18.465 [2024-11-27 22:05:41.497584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:18.465 [2024-11-27 22:05:41.497602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:18.465 [2024-11-27 22:05:41.497619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.465 [2024-11-27 22:05:41.499849] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:18.465 [2024-11-27 22:05:41.503030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:18.465 
[2024-11-27 22:05:41.503082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:18.465 [2024-11-27 22:05:41.503094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.126 ms 00:33:18.465 [2024-11-27 22:05:41.503104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.465 [2024-11-27 22:05:41.513118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:18.465 [2024-11-27 22:05:41.513173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:18.465 [2024-11-27 22:05:41.513187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.791 ms 00:33:18.465 [2024-11-27 22:05:41.513196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.465 [2024-11-27 22:05:41.513233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:18.465 [2024-11-27 22:05:41.513252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:18.465 [2024-11-27 22:05:41.513265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:18.465 [2024-11-27 22:05:41.513273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.465 [2024-11-27 22:05:41.513332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:18.465 [2024-11-27 22:05:41.513367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:18.465 [2024-11-27 22:05:41.513376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:33:18.465 [2024-11-27 22:05:41.513384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.465 [2024-11-27 22:05:41.513399] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:18.465 [2024-11-27 22:05:41.513412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 127744 / 261120 wr_cnt: 1 state: open 00:33:18.465 [2024-11-27 22:05:41.513422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:18.465 [2024-11-27 22:05:41.513430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:18.465 [2024-11-27 22:05:41.513439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:18.465 [2024-11-27 22:05:41.513447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:18.465 [2024-11-27 22:05:41.513455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:18.465 [2024-11-27 22:05:41.513463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:18.465 [2024-11-27 22:05:41.513472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:18.465 [2024-11-27 22:05:41.513480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:18.465 [2024-11-27 22:05:41.513487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:18.465 [2024-11-27 22:05:41.513495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:18.465 [2024-11-27 22:05:41.513530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:18.465 [2024-11-27 
22:05:41.513539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 
00:33:18.466 [2024-11-27 22:05:41.513750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 
wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.513995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.514009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.514017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.514024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.514031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.514038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.514046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.514054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.514061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.514069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.514077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.514084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.514092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.514099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.514107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.514115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.514123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.514132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.514151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.514159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.514168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.514176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.514184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.514192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.514199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.514208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.514216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.514224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.514231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.514239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.514247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:18.466 [2024-11-27 22:05:41.514264] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:18.466 [2024-11-27 22:05:41.514277] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97042294-04eb-4fc0-af05-f54f26f34808 00:33:18.466 [2024-11-27 22:05:41.514286] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 127744 00:33:18.467 [2024-11-27 22:05:41.514293] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 127776 00:33:18.467 [2024-11-27 22:05:41.514301] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 127744 00:33:18.467 [2024-11-27 22:05:41.514309] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:33:18.467 [2024-11-27 22:05:41.514318] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:18.467 [2024-11-27 22:05:41.514326] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:18.467 [2024-11-27 22:05:41.514345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:18.467 [2024-11-27 22:05:41.514353] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:18.467 [2024-11-27 22:05:41.514362] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:18.467 [2024-11-27 22:05:41.514370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:18.467 [2024-11-27 22:05:41.514378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:18.467 [2024-11-27 22:05:41.514386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.971 ms 00:33:18.467 [2024-11-27 
22:05:41.514394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.467 [2024-11-27 22:05:41.516794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:18.467 [2024-11-27 22:05:41.516966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:18.467 [2024-11-27 22:05:41.516996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.383 ms 00:33:18.467 [2024-11-27 22:05:41.517007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.467 [2024-11-27 22:05:41.517125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:18.467 [2024-11-27 22:05:41.517134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:18.467 [2024-11-27 22:05:41.517143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:33:18.467 [2024-11-27 22:05:41.517150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.467 [2024-11-27 22:05:41.524409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.467 [2024-11-27 22:05:41.524453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:18.467 [2024-11-27 22:05:41.524463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.467 [2024-11-27 22:05:41.524471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.467 [2024-11-27 22:05:41.524531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.467 [2024-11-27 22:05:41.524539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:18.467 [2024-11-27 22:05:41.524547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.467 [2024-11-27 22:05:41.524554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.467 [2024-11-27 22:05:41.524590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.467 [2024-11-27 22:05:41.524601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:18.467 [2024-11-27 22:05:41.524613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.467 [2024-11-27 22:05:41.524644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.467 [2024-11-27 22:05:41.524662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.467 [2024-11-27 22:05:41.524671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:18.467 [2024-11-27 22:05:41.524679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.467 [2024-11-27 22:05:41.524687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.467 [2024-11-27 22:05:41.538069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.467 [2024-11-27 22:05:41.538123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:18.467 [2024-11-27 22:05:41.538142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.467 [2024-11-27 22:05:41.538150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.467 [2024-11-27 22:05:41.549038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.467 [2024-11-27 22:05:41.549088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:18.467 [2024-11-27 22:05:41.549100] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.467 [2024-11-27 22:05:41.549108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.467 [2024-11-27 22:05:41.549156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.467 [2024-11-27 22:05:41.549165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:18.467 [2024-11-27 22:05:41.549174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.467 [2024-11-27 22:05:41.549195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.467 [2024-11-27 22:05:41.549230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.467 [2024-11-27 22:05:41.549240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:18.467 [2024-11-27 22:05:41.549249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.467 [2024-11-27 22:05:41.549257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.467 [2024-11-27 22:05:41.549317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.467 [2024-11-27 22:05:41.549327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:18.467 [2024-11-27 22:05:41.549380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.467 [2024-11-27 22:05:41.549392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.467 [2024-11-27 22:05:41.549418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.467 [2024-11-27 22:05:41.549428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:18.467 [2024-11-27 22:05:41.549441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.467 [2024-11-27 22:05:41.549449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.467 [2024-11-27 22:05:41.549491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.467 [2024-11-27 22:05:41.549501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:18.467 [2024-11-27 22:05:41.549510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.467 [2024-11-27 22:05:41.549519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.467 [2024-11-27 22:05:41.549569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:18.467 [2024-11-27 22:05:41.549580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:18.467 [2024-11-27 22:05:41.549589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:18.467 [2024-11-27 22:05:41.549597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:18.467 [2024-11-27 22:05:41.549743] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 53.506 ms, result 0 00:33:19.410 00:33:19.410 00:33:19.410 22:05:42 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:33:19.410 [2024-11-27 22:05:42.356731] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:33:19.410 [2024-11-27 22:05:42.356871] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96412 ] 00:33:19.410 [2024-11-27 22:05:42.503552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:19.672 [2024-11-27 22:05:42.532047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:19.672 [2024-11-27 22:05:42.651138] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:19.672 [2024-11-27 22:05:42.651225] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:19.934 [2024-11-27 22:05:42.811946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.934 [2024-11-27 22:05:42.812163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:19.934 [2024-11-27 22:05:42.812191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:19.934 [2024-11-27 22:05:42.812205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.934 [2024-11-27 22:05:42.812276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.934 [2024-11-27 22:05:42.812290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:19.934 [2024-11-27 22:05:42.812300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:33:19.934 [2024-11-27 22:05:42.812314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.934 [2024-11-27 22:05:42.812363] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:19.934 [2024-11-27 22:05:42.812641] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:19.934 [2024-11-27 22:05:42.812659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.935 [2024-11-27 22:05:42.812668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:19.935 [2024-11-27 22:05:42.812684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:33:19.935 [2024-11-27 22:05:42.812694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.935 [2024-11-27 22:05:42.813003] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:19.935 [2024-11-27 22:05:42.813031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.935 [2024-11-27 22:05:42.813045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:19.935 [2024-11-27 22:05:42.813056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:33:19.935 [2024-11-27 22:05:42.813067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.935 [2024-11-27 22:05:42.813123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.935 [2024-11-27 22:05:42.813132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:19.935 [2024-11-27 22:05:42.813141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:33:19.935 [2024-11-27 22:05:42.813149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.935 [2024-11-27 22:05:42.813415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:19.935 [2024-11-27 22:05:42.813427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:19.935 [2024-11-27 22:05:42.813436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:33:19.935 [2024-11-27 22:05:42.813449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.935 [2024-11-27 22:05:42.813540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.935 [2024-11-27 22:05:42.813554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:19.935 [2024-11-27 22:05:42.813565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:33:19.935 [2024-11-27 22:05:42.813573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.935 [2024-11-27 22:05:42.813597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.935 [2024-11-27 22:05:42.813607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:19.935 [2024-11-27 22:05:42.813615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:19.935 [2024-11-27 22:05:42.813628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.935 [2024-11-27 22:05:42.813654] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:19.935 [2024-11-27 22:05:42.815740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.935 [2024-11-27 22:05:42.815781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:19.935 [2024-11-27 22:05:42.815792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.091 ms 00:33:19.935 [2024-11-27 22:05:42.815800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.935 [2024-11-27 22:05:42.815833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.935 [2024-11-27 22:05:42.815842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:19.935 [2024-11-27 22:05:42.815856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:33:19.935 [2024-11-27 22:05:42.815864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.935 [2024-11-27 22:05:42.815913] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:19.935 [2024-11-27 22:05:42.815938] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:19.935 [2024-11-27 22:05:42.815979] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:19.935 [2024-11-27 22:05:42.815996] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:19.935 [2024-11-27 22:05:42.816108] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:19.935 [2024-11-27 22:05:42.816118] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:19.935 [2024-11-27 22:05:42.816133] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:19.935 [2024-11-27 22:05:42.816144] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:19.935 [2024-11-27 22:05:42.816158] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:19.935 [2024-11-27 22:05:42.816166] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:19.935 [2024-11-27 22:05:42.816174] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:19.935 [2024-11-27 22:05:42.816181] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:19.935 [2024-11-27 22:05:42.816189] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:19.935 [2024-11-27 22:05:42.816197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.935 [2024-11-27 22:05:42.816211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:19.935 [2024-11-27 22:05:42.816220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:33:19.935 [2024-11-27 22:05:42.816227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.935 [2024-11-27 22:05:42.816309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.935 [2024-11-27 22:05:42.816317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:19.935 [2024-11-27 22:05:42.816328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:33:19.935 [2024-11-27 22:05:42.816360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.935 [2024-11-27 22:05:42.816477] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:19.935 [2024-11-27 22:05:42.816490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:19.935 [2024-11-27 22:05:42.816500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:19.935 [2024-11-27 22:05:42.816510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:19.935 [2024-11-27 22:05:42.816521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:19.935 [2024-11-27 22:05:42.816530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:19.935 [2024-11-27 22:05:42.816538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:19.935 [2024-11-27 22:05:42.816546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:19.935 [2024-11-27 22:05:42.816556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:19.935 [2024-11-27 22:05:42.816564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:19.935 [2024-11-27 22:05:42.816572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:19.935 [2024-11-27 22:05:42.816581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:19.935 [2024-11-27 22:05:42.816589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:19.935 [2024-11-27 22:05:42.816598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:19.935 [2024-11-27 22:05:42.816606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:19.935 [2024-11-27 22:05:42.816613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:19.935 [2024-11-27 22:05:42.816635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:19.935 [2024-11-27 22:05:42.816643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:19.935 [2024-11-27 22:05:42.816651] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:19.935 [2024-11-27 22:05:42.816659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:19.935 [2024-11-27 22:05:42.816667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:19.935 [2024-11-27 22:05:42.816675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:19.935 [2024-11-27 22:05:42.816684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:19.935 [2024-11-27 22:05:42.816692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:19.935 [2024-11-27 22:05:42.816702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:19.935 [2024-11-27 22:05:42.816710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:19.935 [2024-11-27 22:05:42.816718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:19.935 [2024-11-27 22:05:42.816725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:19.935 [2024-11-27 22:05:42.816733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:19.935 [2024-11-27 22:05:42.816741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:19.935 [2024-11-27 22:05:42.816749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:19.935 [2024-11-27 22:05:42.816757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:19.935 [2024-11-27 22:05:42.816765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:19.935 [2024-11-27 22:05:42.816773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:19.935 [2024-11-27 22:05:42.816779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:19.935 [2024-11-27 22:05:42.816786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:19.935 [2024-11-27 22:05:42.816793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:19.935 [2024-11-27 22:05:42.816799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:19.935 [2024-11-27 22:05:42.816806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:19.935 [2024-11-27 22:05:42.816813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:19.935 [2024-11-27 22:05:42.816822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:19.935 [2024-11-27 22:05:42.816829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:19.935 [2024-11-27 22:05:42.816835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:19.935 [2024-11-27 22:05:42.816844] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:19.935 [2024-11-27 22:05:42.816859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:19.935 [2024-11-27 22:05:42.816870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:19.935 [2024-11-27 22:05:42.816879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:19.935 [2024-11-27 22:05:42.816887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:19.935 [2024-11-27 22:05:42.816894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:19.935 [2024-11-27 22:05:42.816901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:19.936 
[2024-11-27 22:05:42.816909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:19.936 [2024-11-27 22:05:42.816915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:19.936 [2024-11-27 22:05:42.816922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:19.936 [2024-11-27 22:05:42.816931] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:19.936 [2024-11-27 22:05:42.816940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:19.936 [2024-11-27 22:05:42.816949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:19.936 [2024-11-27 22:05:42.816958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:19.936 [2024-11-27 22:05:42.816966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:19.936 [2024-11-27 22:05:42.816974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:19.936 [2024-11-27 22:05:42.816981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:19.936 [2024-11-27 22:05:42.816988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:19.936 [2024-11-27 22:05:42.816995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:19.936 [2024-11-27 22:05:42.817002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:19.936 [2024-11-27 22:05:42.817008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:19.936 [2024-11-27 22:05:42.817015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:19.936 [2024-11-27 22:05:42.817022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:19.936 [2024-11-27 22:05:42.817034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:19.936 [2024-11-27 22:05:42.817041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:19.936 [2024-11-27 22:05:42.817049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:19.936 [2024-11-27 22:05:42.817055] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:19.936 [2024-11-27 22:05:42.817063] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:19.936 [2024-11-27 22:05:42.817071] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:19.936 [2024-11-27 22:05:42.817081] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:19.936 [2024-11-27 22:05:42.817087] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:19.936 [2024-11-27 22:05:42.817095] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:19.936 [2024-11-27 22:05:42.817104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.936 [2024-11-27 22:05:42.817112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:19.936 [2024-11-27 22:05:42.817120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.695 ms 00:33:19.936 [2024-11-27 22:05:42.817132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.936 [2024-11-27 22:05:42.826778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.936 [2024-11-27 22:05:42.826962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:19.936 [2024-11-27 22:05:42.826986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.605 ms 00:33:19.936 [2024-11-27 22:05:42.826998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.936 [2024-11-27 22:05:42.827090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.936 [2024-11-27 22:05:42.827099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:19.936 [2024-11-27 22:05:42.827108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:33:19.936 [2024-11-27 22:05:42.827116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.936 [2024-11-27 22:05:42.848317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.936 [2024-11-27 22:05:42.848407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:19.936 [2024-11-27 22:05:42.848425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.144 ms 00:33:19.936 [2024-11-27 22:05:42.848437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.936 [2024-11-27 22:05:42.848501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.936 [2024-11-27 22:05:42.848529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:19.936 [2024-11-27 22:05:42.848544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:19.936 [2024-11-27 22:05:42.848555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.936 [2024-11-27 22:05:42.848717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.936 [2024-11-27 22:05:42.848738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:19.936 [2024-11-27 22:05:42.848751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:33:19.936 [2024-11-27 22:05:42.848763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.936 [2024-11-27 22:05:42.848948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.936 [2024-11-27 22:05:42.848971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:19.936 [2024-11-27 22:05:42.848986] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:33:19.936 [2024-11-27 22:05:42.849004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.936 [2024-11-27 22:05:42.857577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.936 [2024-11-27 22:05:42.857619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:19.936 [2024-11-27 22:05:42.857636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.539 ms 00:33:19.936 [2024-11-27 22:05:42.857646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.936 [2024-11-27 22:05:42.857759] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:33:19.936 [2024-11-27 22:05:42.857772] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:19.936 [2024-11-27 22:05:42.857781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.936 [2024-11-27 22:05:42.857794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:19.936 [2024-11-27 22:05:42.857802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:33:19.936 [2024-11-27 22:05:42.857812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.936 [2024-11-27 22:05:42.870105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.936 [2024-11-27 22:05:42.870145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:19.936 [2024-11-27 22:05:42.870157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.277 ms 00:33:19.936 [2024-11-27 22:05:42.870167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.936 [2024-11-27 22:05:42.870297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.936 [2024-11-27 22:05:42.870307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:19.936 [2024-11-27 22:05:42.870323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:33:19.936 [2024-11-27 22:05:42.870352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.936 [2024-11-27 22:05:42.870411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.936 [2024-11-27 22:05:42.870424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:19.936 [2024-11-27 22:05:42.870434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:33:19.936 [2024-11-27 22:05:42.870441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.936 [2024-11-27 22:05:42.870745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.936 [2024-11-27 22:05:42.870772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:19.936 [2024-11-27 22:05:42.870781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:33:19.936 [2024-11-27 22:05:42.870796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.936 [2024-11-27 22:05:42.870812] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:19.936 [2024-11-27 22:05:42.870822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.936 [2024-11-27 22:05:42.870832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:33:19.936 [2024-11-27 22:05:42.870839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:19.936 [2024-11-27 22:05:42.870850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.936 [2024-11-27 22:05:42.880036] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:19.936 [2024-11-27 22:05:42.880191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.936 [2024-11-27 22:05:42.880202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:19.936 [2024-11-27 22:05:42.880211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.323 ms 00:33:19.936 [2024-11-27 22:05:42.880220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.936 [2024-11-27 22:05:42.882712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.936 [2024-11-27 22:05:42.882883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:19.936 [2024-11-27 22:05:42.882909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.465 ms 00:33:19.936 [2024-11-27 22:05:42.882917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.936 [2024-11-27 22:05:42.882999] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:33:19.936 [2024-11-27 22:05:42.883601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.936 [2024-11-27 22:05:42.883631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:19.936 [2024-11-27 22:05:42.883643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:33:19.936 [2024-11-27 22:05:42.883654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.936 [2024-11-27 22:05:42.883682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.936 [2024-11-27 22:05:42.883692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:19.936 [2024-11-27 22:05:42.883701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:19.937 [2024-11-27 22:05:42.883712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.937 [2024-11-27 22:05:42.883749] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:19.937 [2024-11-27 22:05:42.883765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.937 [2024-11-27 22:05:42.883773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:19.937 [2024-11-27 22:05:42.883782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:33:19.937 [2024-11-27 22:05:42.883793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.937 [2024-11-27 22:05:42.890116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.937 [2024-11-27 22:05:42.890285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:19.937 [2024-11-27 22:05:42.890304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.304 ms 00:33:19.937 [2024-11-27 22:05:42.890312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.937 [2024-11-27 22:05:42.890410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:19.937 [2024-11-27 22:05:42.890424] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:19.937 [2024-11-27 22:05:42.890432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:33:19.937 [2024-11-27 22:05:42.890439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:19.937 [2024-11-27 22:05:42.891590] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 79.191 ms, result 0 00:33:21.323  [2024-11-27T22:05:45.388Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-27T22:05:46.334Z] Copying: 27/1024 [MB] (16 MBps) [2024-11-27T22:05:47.277Z] Copying: 39/1024 [MB] (12 MBps) [2024-11-27T22:05:48.219Z] Copying: 50/1024 [MB] (10 MBps) [2024-11-27T22:05:49.164Z] Copying: 61/1024 [MB] (10 MBps) [2024-11-27T22:05:50.108Z] Copying: 71/1024 [MB] (10 MBps) [2024-11-27T22:05:51.496Z] Copying: 82/1024 [MB] (10 MBps) [2024-11-27T22:05:52.115Z] Copying: 94/1024 [MB] (11 MBps) [2024-11-27T22:05:53.086Z] Copying: 106/1024 [MB] (12 MBps) [2024-11-27T22:05:54.474Z] Copying: 117/1024 [MB] (10 MBps) [2024-11-27T22:05:55.419Z] Copying: 127/1024 [MB] (10 MBps) [2024-11-27T22:05:56.367Z] Copying: 140/1024 [MB] (12 MBps) [2024-11-27T22:05:57.314Z] Copying: 152/1024 [MB] (12 MBps) [2024-11-27T22:05:58.257Z] Copying: 163/1024 [MB] (11 MBps) [2024-11-27T22:05:59.201Z] Copying: 176/1024 [MB] (12 MBps) [2024-11-27T22:06:00.144Z] Copying: 187/1024 [MB] (11 MBps) [2024-11-27T22:06:01.087Z] Copying: 197/1024 [MB] (10 MBps) [2024-11-27T22:06:02.476Z] Copying: 208/1024 [MB] (10 MBps) [2024-11-27T22:06:03.419Z] Copying: 220/1024 [MB] (12 MBps) [2024-11-27T22:06:04.363Z] Copying: 233/1024 [MB] (12 MBps) [2024-11-27T22:06:05.308Z] Copying: 245/1024 [MB] (12 MBps) [2024-11-27T22:06:06.251Z] Copying: 257/1024 [MB] (11 MBps) [2024-11-27T22:06:07.195Z] Copying: 268/1024 [MB] (11 MBps) [2024-11-27T22:06:08.139Z] Copying: 281/1024 [MB] (12 MBps) [2024-11-27T22:06:09.085Z] Copying: 293/1024 [MB] (12 MBps) [2024-11-27T22:06:10.484Z] Copying: 305/1024 [MB] (12 MBps) [2024-11-27T22:06:11.427Z] Copying: 317/1024 [MB] (11 MBps) [2024-11-27T22:06:12.372Z] Copying: 329/1024 [MB] (12 MBps) [2024-11-27T22:06:13.317Z] Copying: 341/1024 [MB] (12 MBps) [2024-11-27T22:06:14.260Z] Copying: 353/1024 [MB] (12 MBps) [2024-11-27T22:06:15.204Z] Copying: 364/1024 [MB] (11 MBps) [2024-11-27T22:06:16.149Z] Copying: 376/1024 [MB] (11 MBps) [2024-11-27T22:06:17.094Z] Copying: 388/1024 [MB] (11 MBps) [2024-11-27T22:06:18.482Z] Copying: 400/1024 [MB] (11 MBps) [2024-11-27T22:06:19.430Z] Copying: 412/1024 [MB] (11 MBps) [2024-11-27T22:06:20.380Z] Copying: 424/1024 [MB] (11 MBps) [2024-11-27T22:06:21.324Z] Copying: 436/1024 [MB] (11 MBps) [2024-11-27T22:06:22.267Z] Copying: 447/1024 [MB] (11 MBps) [2024-11-27T22:06:23.211Z] Copying: 459/1024 [MB] (11 MBps) [2024-11-27T22:06:24.159Z] Copying: 470/1024 [MB] (11 MBps) [2024-11-27T22:06:25.118Z] Copying: 482/1024 [MB] (11 MBps) [2024-11-27T22:06:26.508Z] Copying: 493/1024 [MB] (11 MBps) [2024-11-27T22:06:27.081Z] Copying: 504/1024 [MB] (10 MBps) [2024-11-27T22:06:28.467Z] Copying: 515/1024 [MB] (10 MBps) [2024-11-27T22:06:29.414Z] Copying: 525/1024 [MB] (10 MBps) [2024-11-27T22:06:30.358Z] Copying: 537/1024 [MB] (11 MBps) [2024-11-27T22:06:31.303Z] Copying: 548/1024 [MB] (10 MBps) [2024-11-27T22:06:32.247Z] Copying: 559/1024 [MB] (11 MBps) [2024-11-27T22:06:33.193Z] Copying: 571/1024 [MB] (11 MBps) [2024-11-27T22:06:34.138Z] Copying: 582/1024 [MB] (11 MBps) [2024-11-27T22:06:35.085Z] Copying: 606/1024 [MB] (23 MBps) 
[2024-11-27T22:06:36.476Z] Copying: 619/1024 [MB] (13 MBps) [2024-11-27T22:06:37.421Z] Copying: 630/1024 [MB] (10 MBps) [2024-11-27T22:06:38.368Z] Copying: 643/1024 [MB] (13 MBps) [2024-11-27T22:06:39.317Z] Copying: 663/1024 [MB] (20 MBps) [2024-11-27T22:06:40.262Z] Copying: 678/1024 [MB] (15 MBps) [2024-11-27T22:06:41.204Z] Copying: 694/1024 [MB] (15 MBps) [2024-11-27T22:06:42.149Z] Copying: 707/1024 [MB] (12 MBps) [2024-11-27T22:06:43.096Z] Copying: 720/1024 [MB] (13 MBps) [2024-11-27T22:06:44.488Z] Copying: 731/1024 [MB] (11 MBps) [2024-11-27T22:06:45.435Z] Copying: 747/1024 [MB] (15 MBps) [2024-11-27T22:06:46.381Z] Copying: 759/1024 [MB] (12 MBps) [2024-11-27T22:06:47.326Z] Copying: 778/1024 [MB] (19 MBps) [2024-11-27T22:06:48.272Z] Copying: 790/1024 [MB] (12 MBps) [2024-11-27T22:06:49.218Z] Copying: 806/1024 [MB] (15 MBps) [2024-11-27T22:06:50.172Z] Copying: 825/1024 [MB] (18 MBps) [2024-11-27T22:06:51.118Z] Copying: 847/1024 [MB] (22 MBps) [2024-11-27T22:06:52.508Z] Copying: 860/1024 [MB] (12 MBps) [2024-11-27T22:06:53.082Z] Copying: 870/1024 [MB] (10 MBps) [2024-11-27T22:06:54.474Z] Copying: 881/1024 [MB] (11 MBps) [2024-11-27T22:06:55.419Z] Copying: 892/1024 [MB] (11 MBps) [2024-11-27T22:06:56.459Z] Copying: 903/1024 [MB] (10 MBps) [2024-11-27T22:06:57.407Z] Copying: 914/1024 [MB] (10 MBps) [2024-11-27T22:06:58.352Z] Copying: 924/1024 [MB] (10 MBps) [2024-11-27T22:06:59.305Z] Copying: 936/1024 [MB] (11 MBps) [2024-11-27T22:07:00.255Z] Copying: 947/1024 [MB] (11 MBps) [2024-11-27T22:07:01.200Z] Copying: 957/1024 [MB] (10 MBps) [2024-11-27T22:07:02.146Z] Copying: 968/1024 [MB] (10 MBps) [2024-11-27T22:07:03.092Z] Copying: 979/1024 [MB] (10 MBps) [2024-11-27T22:07:04.482Z] Copying: 990/1024 [MB] (11 MBps) [2024-11-27T22:07:05.430Z] Copying: 1009/1024 [MB] (18 MBps) [2024-11-27T22:07:05.693Z] Copying: 1019/1024 [MB] (10 MBps) [2024-11-27T22:07:05.693Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-11-27 22:07:05.672136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.572 [2024-11-27 22:07:05.672536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:42.572 [2024-11-27 22:07:05.672770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:42.572 [2024-11-27 22:07:05.672818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.572 [2024-11-27 22:07:05.672902] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:42.572 [2024-11-27 22:07:05.674065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.572 [2024-11-27 22:07:05.674289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:42.572 [2024-11-27 22:07:05.674411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.092 ms 00:34:42.572 [2024-11-27 22:07:05.674465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.572 [2024-11-27 22:07:05.674877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.572 [2024-11-27 22:07:05.674920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:42.572 [2024-11-27 22:07:05.674954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:34:42.572 [2024-11-27 22:07:05.675057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.572 [2024-11-27 22:07:05.675142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.572 [2024-11-27 
22:07:05.675179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:42.572 [2024-11-27 22:07:05.675213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:34:42.572 [2024-11-27 22:07:05.675316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.572 [2024-11-27 22:07:05.675459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.572 [2024-11-27 22:07:05.675502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:42.572 [2024-11-27 22:07:05.675534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:34:42.572 [2024-11-27 22:07:05.675566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.572 [2024-11-27 22:07:05.675676] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:42.572 [2024-11-27 22:07:05.675726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:34:42.572 [2024-11-27 22:07:05.675785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.675832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.675951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676515] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:42.572 [2024-11-27 22:07:05.676851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.676864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.676879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.676892] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.676905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.676918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.676930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.676967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.676977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.676987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.676996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 
22:07:05.677153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 
00:34:42.573 [2024-11-27 22:07:05.677847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:42.573 [2024-11-27 22:07:05.677921] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:42.573 [2024-11-27 22:07:05.677930] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97042294-04eb-4fc0-af05-f54f26f34808 00:34:42.573 [2024-11-27 22:07:05.677940] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:34:42.573 [2024-11-27 22:07:05.677948] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 3360 00:34:42.573 [2024-11-27 22:07:05.677956] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 3328 00:34:42.573 [2024-11-27 22:07:05.677969] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0096 00:34:42.573 [2024-11-27 22:07:05.677979] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:42.573 [2024-11-27 22:07:05.677988] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:42.573 [2024-11-27 22:07:05.678000] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:42.573 [2024-11-27 22:07:05.678007] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:42.573 [2024-11-27 22:07:05.678014] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:42.573 [2024-11-27 22:07:05.678023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.573 [2024-11-27 22:07:05.678032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:42.574 [2024-11-27 22:07:05.678041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.349 ms 00:34:42.574 [2024-11-27 22:07:05.678050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.574 [2024-11-27 22:07:05.681434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.574 [2024-11-27 22:07:05.681485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:42.574 [2024-11-27 22:07:05.681496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.355 ms 00:34:42.574 [2024-11-27 22:07:05.681511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.574 [2024-11-27 22:07:05.681683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.574 [2024-11-27 22:07:05.681693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:42.574 [2024-11-27 22:07:05.681703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:34:42.574 [2024-11-27 
22:07:05.681711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.836 [2024-11-27 22:07:05.692225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.836 [2024-11-27 22:07:05.692271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:42.836 [2024-11-27 22:07:05.692283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.836 [2024-11-27 22:07:05.692291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.836 [2024-11-27 22:07:05.692396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.836 [2024-11-27 22:07:05.692406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:42.836 [2024-11-27 22:07:05.692416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.836 [2024-11-27 22:07:05.692426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.836 [2024-11-27 22:07:05.692492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.836 [2024-11-27 22:07:05.692533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:42.836 [2024-11-27 22:07:05.692542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.836 [2024-11-27 22:07:05.692551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.836 [2024-11-27 22:07:05.692587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.836 [2024-11-27 22:07:05.692598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:42.836 [2024-11-27 22:07:05.692608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.836 [2024-11-27 22:07:05.692627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.836 [2024-11-27 22:07:05.711557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.836 [2024-11-27 22:07:05.711794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:42.836 [2024-11-27 22:07:05.711815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.836 [2024-11-27 22:07:05.711824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.836 [2024-11-27 22:07:05.727117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.836 [2024-11-27 22:07:05.727330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:42.836 [2024-11-27 22:07:05.727384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.836 [2024-11-27 22:07:05.727395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.836 [2024-11-27 22:07:05.727454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.836 [2024-11-27 22:07:05.727465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:42.836 [2024-11-27 22:07:05.727482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.836 [2024-11-27 22:07:05.727491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.836 [2024-11-27 22:07:05.727532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.836 [2024-11-27 22:07:05.727542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:42.836 [2024-11-27 22:07:05.727551] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.836 [2024-11-27 22:07:05.727566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.836 [2024-11-27 22:07:05.727632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.836 [2024-11-27 22:07:05.727643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:42.836 [2024-11-27 22:07:05.727652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.836 [2024-11-27 22:07:05.727664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.836 [2024-11-27 22:07:05.727697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.836 [2024-11-27 22:07:05.727708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:42.836 [2024-11-27 22:07:05.727718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.836 [2024-11-27 22:07:05.727727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.836 [2024-11-27 22:07:05.727778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.836 [2024-11-27 22:07:05.727789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:42.836 [2024-11-27 22:07:05.727798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.836 [2024-11-27 22:07:05.727810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.836 [2024-11-27 22:07:05.727867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.836 [2024-11-27 22:07:05.727879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:42.836 [2024-11-27 22:07:05.727889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.836 [2024-11-27 22:07:05.727905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.836 [2024-11-27 22:07:05.728072] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 55.901 ms, result 0 00:34:43.098 00:34:43.098 00:34:43.098 22:07:06 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:45.647 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:34:45.647 22:07:08 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:34:45.647 22:07:08 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:34:45.647 22:07:08 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:34:45.647 22:07:08 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:45.647 22:07:08 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:45.647 22:07:08 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 94295 00:34:45.647 22:07:08 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 94295 ']' 00:34:45.647 22:07:08 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 94295 00:34:45.647 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (94295) - No such process 00:34:45.647 Process with pid 94295 is not found 00:34:45.647 Remove shared memory files 00:34:45.647 22:07:08 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 94295 is not found' 00:34:45.647 
22:07:08 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:34:45.647 22:07:08 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:45.647 22:07:08 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:34:45.647 22:07:08 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_97042294-04eb-4fc0-af05-f54f26f34808_band_md /dev/hugepages/ftl_97042294-04eb-4fc0-af05-f54f26f34808_l2p_l1 /dev/hugepages/ftl_97042294-04eb-4fc0-af05-f54f26f34808_l2p_l2 /dev/hugepages/ftl_97042294-04eb-4fc0-af05-f54f26f34808_l2p_l2_ctx /dev/hugepages/ftl_97042294-04eb-4fc0-af05-f54f26f34808_nvc_md /dev/hugepages/ftl_97042294-04eb-4fc0-af05-f54f26f34808_p2l_pool /dev/hugepages/ftl_97042294-04eb-4fc0-af05-f54f26f34808_sb /dev/hugepages/ftl_97042294-04eb-4fc0-af05-f54f26f34808_sb_shm /dev/hugepages/ftl_97042294-04eb-4fc0-af05-f54f26f34808_trim_bitmap /dev/hugepages/ftl_97042294-04eb-4fc0-af05-f54f26f34808_trim_log /dev/hugepages/ftl_97042294-04eb-4fc0-af05-f54f26f34808_trim_md /dev/hugepages/ftl_97042294-04eb-4fc0-af05-f54f26f34808_vmap 00:34:45.647 22:07:08 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:34:45.647 22:07:08 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:45.647 22:07:08 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:34:45.647 00:34:45.647 real 4m54.765s 00:34:45.647 user 4m42.894s 00:34:45.647 sys 0m11.346s 00:34:45.647 ************************************ 00:34:45.647 END TEST ftl_restore_fast 00:34:45.647 ************************************ 00:34:45.647 22:07:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:45.647 22:07:08 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:34:45.647 Process with pid 85887 is not found 00:34:45.647 22:07:08 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:34:45.647 22:07:08 ftl -- ftl/ftl.sh@14 -- # killprocess 85887 00:34:45.647 22:07:08 ftl -- common/autotest_common.sh@954 -- # '[' -z 85887 ']' 00:34:45.647 22:07:08 ftl -- common/autotest_common.sh@958 -- # kill -0 85887 00:34:45.647 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (85887) - No such process 00:34:45.647 22:07:08 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 85887 is not found' 00:34:45.647 22:07:08 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:34:45.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:45.647 22:07:08 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=97296 00:34:45.647 22:07:08 ftl -- ftl/ftl.sh@20 -- # waitforlisten 97296 00:34:45.647 22:07:08 ftl -- common/autotest_common.sh@835 -- # '[' -z 97296 ']' 00:34:45.647 22:07:08 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:45.647 22:07:08 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:45.647 22:07:08 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:45.647 22:07:08 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:45.647 22:07:08 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:45.647 22:07:08 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:45.647 [2024-11-27 22:07:08.584630] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
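For reference, the at_ftl_exit teardown recorded around this point reduces to a short RPC sequence against a freshly started spdk_tgt: re-attach the NVMe controller, list any leftover lvol stores, delete them, and stop the target before the PCI devices are reset. A minimal sketch of that sequence, assuming a checkout at /home/vagrant/spdk_repo/spdk and the same controller address seen in this log (0000:00:11.0); the sleep-based wait and plain kill are illustrative stand-ins, not the exact waitforlisten/killprocess helpers used by the test scripts:

  #!/usr/bin/env bash
  set -euo pipefail
  SPDK_DIR=/home/vagrant/spdk_repo/spdk          # assumed repo location, matching the paths in this log
  RPC="$SPDK_DIR/scripts/rpc.py"

  "$SPDK_DIR/build/bin/spdk_tgt" &               # start the target, as ftl.sh does before cleanup
  TGT_PID=$!
  sleep 3                                        # crude wait; the test scripts use waitforlisten instead

  # Re-attach the base NVMe controller so its bdevs (and any lvol stores on them) are visible.
  "$RPC" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0

  # Delete every leftover lvol store, mirroring clear_lvols in the ftl test common script.
  for uuid in $("$RPC" bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
      "$RPC" bdev_lvol_delete_lvstore -u "$uuid"
  done

  kill "$TGT_PID"                                # stand-in for killprocess on the spdk_tgt pid
  wait "$TGT_PID" || true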
00:34:45.647 [2024-11-27 22:07:08.585066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97296 ] 00:34:45.647 [2024-11-27 22:07:08.732884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:45.908 [2024-11-27 22:07:08.775088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:46.481 22:07:09 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:46.481 22:07:09 ftl -- common/autotest_common.sh@868 -- # return 0 00:34:46.481 22:07:09 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:34:46.742 nvme0n1 00:34:46.742 22:07:09 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:34:46.742 22:07:09 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:46.742 22:07:09 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:34:47.004 22:07:09 ftl -- ftl/common.sh@28 -- # stores=c7a5cec2-964a-4cc8-8d37-75fe3f1f1d47 00:34:47.004 22:07:09 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:34:47.004 22:07:09 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c7a5cec2-964a-4cc8-8d37-75fe3f1f1d47 00:34:47.265 22:07:10 ftl -- ftl/ftl.sh@23 -- # killprocess 97296 00:34:47.265 22:07:10 ftl -- common/autotest_common.sh@954 -- # '[' -z 97296 ']' 00:34:47.265 22:07:10 ftl -- common/autotest_common.sh@958 -- # kill -0 97296 00:34:47.265 22:07:10 ftl -- common/autotest_common.sh@959 -- # uname 00:34:47.265 22:07:10 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:47.265 22:07:10 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97296 00:34:47.265 killing process with pid 97296 00:34:47.265 22:07:10 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:47.265 22:07:10 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:47.265 22:07:10 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97296' 00:34:47.265 22:07:10 ftl -- common/autotest_common.sh@973 -- # kill 97296 00:34:47.265 22:07:10 ftl -- common/autotest_common.sh@978 -- # wait 97296 00:34:47.837 22:07:10 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:34:47.837 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:48.098 Waiting for block devices as requested 00:34:48.098 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:34:48.098 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:48.098 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:34:48.359 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:34:53.653 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:34:53.653 22:07:16 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:34:53.653 Remove shared memory files 00:34:53.653 22:07:16 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:53.653 22:07:16 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:34:53.653 22:07:16 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:34:53.653 22:07:16 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:34:53.653 22:07:16 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:53.653 22:07:16 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:34:53.653 
************************************ 00:34:53.653 END TEST ftl 00:34:53.653 ************************************ 00:34:53.653 00:34:53.653 real 17m11.616s 00:34:53.653 user 19m0.074s 00:34:53.653 sys 1m17.672s 00:34:53.653 22:07:16 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:53.653 22:07:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:53.653 22:07:16 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:53.653 22:07:16 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:34:53.653 22:07:16 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:53.653 22:07:16 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:34:53.653 22:07:16 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:53.653 22:07:16 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:53.653 22:07:16 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:34:53.653 22:07:16 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:34:53.653 22:07:16 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:34:53.653 22:07:16 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:34:53.653 22:07:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:53.653 22:07:16 -- common/autotest_common.sh@10 -- # set +x 00:34:53.653 22:07:16 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:34:53.653 22:07:16 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:34:53.653 22:07:16 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:34:53.653 22:07:16 -- common/autotest_common.sh@10 -- # set +x 00:34:55.041 INFO: APP EXITING 00:34:55.041 INFO: killing all VMs 00:34:55.041 INFO: killing vhost app 00:34:55.041 INFO: EXIT DONE 00:34:55.302 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:55.564 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:34:55.564 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:34:55.564 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:34:55.825 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:34:56.087 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:56.348 Cleaning 00:34:56.348 Removing: /var/run/dpdk/spdk0/config 00:34:56.348 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:56.348 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:56.348 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:56.348 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:56.348 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:56.612 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:56.612 Removing: /var/run/dpdk/spdk0 00:34:56.612 Removing: /var/run/dpdk/spdk_pid68895 00:34:56.612 Removing: /var/run/dpdk/spdk_pid69053 00:34:56.612 Removing: /var/run/dpdk/spdk_pid69249 00:34:56.612 Removing: /var/run/dpdk/spdk_pid69337 00:34:56.612 Removing: /var/run/dpdk/spdk_pid69360 00:34:56.612 Removing: /var/run/dpdk/spdk_pid69471 00:34:56.612 Removing: /var/run/dpdk/spdk_pid69489 00:34:56.612 Removing: /var/run/dpdk/spdk_pid69666 00:34:56.612 Removing: /var/run/dpdk/spdk_pid69740 00:34:56.612 Removing: /var/run/dpdk/spdk_pid69819 00:34:56.612 Removing: /var/run/dpdk/spdk_pid69914 00:34:56.612 Removing: /var/run/dpdk/spdk_pid69994 00:34:56.612 Removing: /var/run/dpdk/spdk_pid70034 00:34:56.612 Removing: /var/run/dpdk/spdk_pid70065 00:34:56.612 Removing: /var/run/dpdk/spdk_pid70135 00:34:56.612 Removing: /var/run/dpdk/spdk_pid70214 00:34:56.612 Removing: /var/run/dpdk/spdk_pid70633 00:34:56.612 Removing: /var/run/dpdk/spdk_pid70681 
00:34:56.612 Removing: /var/run/dpdk/spdk_pid70722 00:34:56.612 Removing: /var/run/dpdk/spdk_pid70738 00:34:56.612 Removing: /var/run/dpdk/spdk_pid70801 00:34:56.612 Removing: /var/run/dpdk/spdk_pid70812 00:34:56.612 Removing: /var/run/dpdk/spdk_pid70870 00:34:56.612 Removing: /var/run/dpdk/spdk_pid70886 00:34:56.612 Removing: /var/run/dpdk/spdk_pid70928 00:34:56.612 Removing: /var/run/dpdk/spdk_pid70946 00:34:56.612 Removing: /var/run/dpdk/spdk_pid70988 00:34:56.612 Removing: /var/run/dpdk/spdk_pid71006 00:34:56.612 Removing: /var/run/dpdk/spdk_pid71133 00:34:56.612 Removing: /var/run/dpdk/spdk_pid71164 00:34:56.612 Removing: /var/run/dpdk/spdk_pid71253 00:34:56.612 Removing: /var/run/dpdk/spdk_pid71414 00:34:56.612 Removing: /var/run/dpdk/spdk_pid71484 00:34:56.612 Removing: /var/run/dpdk/spdk_pid71507 00:34:56.612 Removing: /var/run/dpdk/spdk_pid71928 00:34:56.612 Removing: /var/run/dpdk/spdk_pid72015 00:34:56.612 Removing: /var/run/dpdk/spdk_pid72113 00:34:56.612 Removing: /var/run/dpdk/spdk_pid72144 00:34:56.612 Removing: /var/run/dpdk/spdk_pid72177 00:34:56.612 Removing: /var/run/dpdk/spdk_pid72250 00:34:56.612 Removing: /var/run/dpdk/spdk_pid72861 00:34:56.612 Removing: /var/run/dpdk/spdk_pid72886 00:34:56.612 Removing: /var/run/dpdk/spdk_pid73342 00:34:56.612 Removing: /var/run/dpdk/spdk_pid73429 00:34:56.612 Removing: /var/run/dpdk/spdk_pid73540 00:34:56.612 Removing: /var/run/dpdk/spdk_pid73577 00:34:56.612 Removing: /var/run/dpdk/spdk_pid73602 00:34:56.612 Removing: /var/run/dpdk/spdk_pid73622 00:34:56.612 Removing: /var/run/dpdk/spdk_pid75439 00:34:56.612 Removing: /var/run/dpdk/spdk_pid75560 00:34:56.612 Removing: /var/run/dpdk/spdk_pid75564 00:34:56.612 Removing: /var/run/dpdk/spdk_pid75581 00:34:56.612 Removing: /var/run/dpdk/spdk_pid75626 00:34:56.612 Removing: /var/run/dpdk/spdk_pid75630 00:34:56.612 Removing: /var/run/dpdk/spdk_pid75642 00:34:56.612 Removing: /var/run/dpdk/spdk_pid75681 00:34:56.612 Removing: /var/run/dpdk/spdk_pid75685 00:34:56.612 Removing: /var/run/dpdk/spdk_pid75697 00:34:56.612 Removing: /var/run/dpdk/spdk_pid75742 00:34:56.612 Removing: /var/run/dpdk/spdk_pid75746 00:34:56.612 Removing: /var/run/dpdk/spdk_pid75758 00:34:56.612 Removing: /var/run/dpdk/spdk_pid77151 00:34:56.612 Removing: /var/run/dpdk/spdk_pid77237 00:34:56.612 Removing: /var/run/dpdk/spdk_pid78633 00:34:56.612 Removing: /var/run/dpdk/spdk_pid80362 00:34:56.612 Removing: /var/run/dpdk/spdk_pid80425 00:34:56.612 Removing: /var/run/dpdk/spdk_pid80490 00:34:56.612 Removing: /var/run/dpdk/spdk_pid80591 00:34:56.612 Removing: /var/run/dpdk/spdk_pid80674 00:34:56.612 Removing: /var/run/dpdk/spdk_pid80762 00:34:56.612 Removing: /var/run/dpdk/spdk_pid80818 00:34:56.612 Removing: /var/run/dpdk/spdk_pid80883 00:34:56.612 Removing: /var/run/dpdk/spdk_pid80982 00:34:56.612 Removing: /var/run/dpdk/spdk_pid81063 00:34:56.612 Removing: /var/run/dpdk/spdk_pid81153 00:34:56.612 Removing: /var/run/dpdk/spdk_pid81205 00:34:56.612 Removing: /var/run/dpdk/spdk_pid81275 00:34:56.612 Removing: /var/run/dpdk/spdk_pid81373 00:34:56.612 Removing: /var/run/dpdk/spdk_pid81454 00:34:56.612 Removing: /var/run/dpdk/spdk_pid81549 00:34:56.612 Removing: /var/run/dpdk/spdk_pid81601 00:34:56.612 Removing: /var/run/dpdk/spdk_pid81671 00:34:56.612 Removing: /var/run/dpdk/spdk_pid81764 00:34:56.612 Removing: /var/run/dpdk/spdk_pid81850 00:34:56.612 Removing: /var/run/dpdk/spdk_pid81935 00:34:56.612 Removing: /var/run/dpdk/spdk_pid81991 00:34:56.612 Removing: /var/run/dpdk/spdk_pid82060 00:34:56.612 Removing: 
/var/run/dpdk/spdk_pid82124 00:34:56.612 Removing: /var/run/dpdk/spdk_pid82187 00:34:56.612 Removing: /var/run/dpdk/spdk_pid82285 00:34:56.612 Removing: /var/run/dpdk/spdk_pid82365 00:34:56.612 Removing: /var/run/dpdk/spdk_pid82454 00:34:56.612 Removing: /var/run/dpdk/spdk_pid82506 00:34:56.612 Removing: /var/run/dpdk/spdk_pid82570 00:34:56.612 Removing: /var/run/dpdk/spdk_pid82638 00:34:56.874 Removing: /var/run/dpdk/spdk_pid82701 00:34:56.874 Removing: /var/run/dpdk/spdk_pid82799 00:34:56.874 Removing: /var/run/dpdk/spdk_pid82884 00:34:56.874 Removing: /var/run/dpdk/spdk_pid83018 00:34:56.874 Removing: /var/run/dpdk/spdk_pid83285 00:34:56.874 Removing: /var/run/dpdk/spdk_pid83311 00:34:56.874 Removing: /var/run/dpdk/spdk_pid83750 00:34:56.874 Removing: /var/run/dpdk/spdk_pid83930 00:34:56.874 Removing: /var/run/dpdk/spdk_pid84022 00:34:56.874 Removing: /var/run/dpdk/spdk_pid84129 00:34:56.874 Removing: /var/run/dpdk/spdk_pid84166 00:34:56.874 Removing: /var/run/dpdk/spdk_pid84191 00:34:56.874 Removing: /var/run/dpdk/spdk_pid84486 00:34:56.874 Removing: /var/run/dpdk/spdk_pid84524 00:34:56.874 Removing: /var/run/dpdk/spdk_pid84578 00:34:56.874 Removing: /var/run/dpdk/spdk_pid84948 00:34:56.874 Removing: /var/run/dpdk/spdk_pid85092 00:34:56.874 Removing: /var/run/dpdk/spdk_pid85887 00:34:56.874 Removing: /var/run/dpdk/spdk_pid86009 00:34:56.874 Removing: /var/run/dpdk/spdk_pid86162 00:34:56.874 Removing: /var/run/dpdk/spdk_pid86248 00:34:56.874 Removing: /var/run/dpdk/spdk_pid86547 00:34:56.874 Removing: /var/run/dpdk/spdk_pid86800 00:34:56.874 Removing: /var/run/dpdk/spdk_pid87141 00:34:56.874 Removing: /var/run/dpdk/spdk_pid87301 00:34:56.874 Removing: /var/run/dpdk/spdk_pid87420 00:34:56.874 Removing: /var/run/dpdk/spdk_pid87456 00:34:56.874 Removing: /var/run/dpdk/spdk_pid87616 00:34:56.874 Removing: /var/run/dpdk/spdk_pid87630 00:34:56.874 Removing: /var/run/dpdk/spdk_pid87666 00:34:56.874 Removing: /var/run/dpdk/spdk_pid87936 00:34:56.874 Removing: /var/run/dpdk/spdk_pid88156 00:34:56.874 Removing: /var/run/dpdk/spdk_pid88689 00:34:56.874 Removing: /var/run/dpdk/spdk_pid89458 00:34:56.874 Removing: /var/run/dpdk/spdk_pid90039 00:34:56.874 Removing: /var/run/dpdk/spdk_pid90838 00:34:56.874 Removing: /var/run/dpdk/spdk_pid90980 00:34:56.874 Removing: /var/run/dpdk/spdk_pid91057 00:34:56.874 Removing: /var/run/dpdk/spdk_pid91476 00:34:56.874 Removing: /var/run/dpdk/spdk_pid91530 00:34:56.874 Removing: /var/run/dpdk/spdk_pid92061 00:34:56.874 Removing: /var/run/dpdk/spdk_pid92533 00:34:56.874 Removing: /var/run/dpdk/spdk_pid93354 00:34:56.874 Removing: /var/run/dpdk/spdk_pid93482 00:34:56.874 Removing: /var/run/dpdk/spdk_pid93514 00:34:56.875 Removing: /var/run/dpdk/spdk_pid93567 00:34:56.875 Removing: /var/run/dpdk/spdk_pid93617 00:34:56.875 Removing: /var/run/dpdk/spdk_pid93671 00:34:56.875 Removing: /var/run/dpdk/spdk_pid93846 00:34:56.875 Removing: /var/run/dpdk/spdk_pid93925 00:34:56.875 Removing: /var/run/dpdk/spdk_pid93982 00:34:56.875 Removing: /var/run/dpdk/spdk_pid94071 00:34:56.875 Removing: /var/run/dpdk/spdk_pid94106 00:34:56.875 Removing: /var/run/dpdk/spdk_pid94156 00:34:56.875 Removing: /var/run/dpdk/spdk_pid94295 00:34:56.875 Removing: /var/run/dpdk/spdk_pid94481 00:34:56.875 Removing: /var/run/dpdk/spdk_pid94988 00:34:56.875 Removing: /var/run/dpdk/spdk_pid95630 00:34:56.875 Removing: /var/run/dpdk/spdk_pid96412 00:34:56.875 Removing: /var/run/dpdk/spdk_pid97296 00:34:56.875 Clean 00:34:56.875 22:07:19 -- common/autotest_common.sh@1453 -- # return 0 00:34:56.875 
22:07:19 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:34:56.875 22:07:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:56.875 22:07:19 -- common/autotest_common.sh@10 -- # set +x 00:34:57.137 22:07:19 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:34:57.137 22:07:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:57.137 22:07:19 -- common/autotest_common.sh@10 -- # set +x 00:34:57.137 22:07:20 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:57.137 22:07:20 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:34:57.137 22:07:20 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:34:57.137 22:07:20 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:34:57.137 22:07:20 -- spdk/autotest.sh@398 -- # hostname 00:34:57.137 22:07:20 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:34:57.137 geninfo: WARNING: invalid characters removed from testname! 00:35:23.732 22:07:45 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:26.289 22:07:48 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:28.877 22:07:51 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:30.792 22:07:53 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:32.177 22:07:55 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:34.092 22:07:56 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:35:36.637 22:07:59 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:35:36.637 22:07:59 -- spdk/autorun.sh@1 -- $ timing_finish 00:35:36.637 22:07:59 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:35:36.637 22:07:59 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:35:36.637 22:07:59 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:35:36.637 22:07:59 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:35:36.637 + [[ -n 5763 ]] 00:35:36.637 + sudo kill 5763 00:35:36.649 [Pipeline] } 00:35:36.671 [Pipeline] // timeout 00:35:36.679 [Pipeline] } 00:35:36.697 [Pipeline] // stage 00:35:36.704 [Pipeline] } 00:35:36.721 [Pipeline] // catchError 00:35:36.732 [Pipeline] stage 00:35:36.734 [Pipeline] { (Stop VM) 00:35:36.750 [Pipeline] sh 00:35:37.039 + vagrant halt 00:35:39.590 ==> default: Halting domain... 00:35:46.200 [Pipeline] sh 00:35:46.502 + vagrant destroy -f 00:35:49.047 ==> default: Removing domain... 00:35:49.628 [Pipeline] sh 00:35:49.910 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:35:49.920 [Pipeline] } 00:35:49.935 [Pipeline] // stage 00:35:49.941 [Pipeline] } 00:35:49.956 [Pipeline] // dir 00:35:49.962 [Pipeline] } 00:35:49.977 [Pipeline] // wrap 00:35:49.984 [Pipeline] } 00:35:49.997 [Pipeline] // catchError 00:35:50.007 [Pipeline] stage 00:35:50.009 [Pipeline] { (Epilogue) 00:35:50.024 [Pipeline] sh 00:35:50.309 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:35:55.595 [Pipeline] catchError 00:35:55.598 [Pipeline] { 00:35:55.613 [Pipeline] sh 00:35:55.900 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:35:55.900 Artifacts sizes are good 00:35:55.911 [Pipeline] } 00:35:55.929 [Pipeline] // catchError 00:35:55.943 [Pipeline] archiveArtifacts 00:35:55.953 Archiving artifacts 00:35:56.077 [Pipeline] cleanWs 00:35:56.107 [WS-CLEANUP] Deleting project workspace... 00:35:56.107 [WS-CLEANUP] Deferred wipeout is used... 00:35:56.141 [WS-CLEANUP] done 00:35:56.143 [Pipeline] } 00:35:56.163 [Pipeline] // stage 00:35:56.169 [Pipeline] } 00:35:56.200 [Pipeline] // node 00:35:56.212 [Pipeline] End of Pipeline 00:35:56.265 Finished: SUCCESS
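For reference, the coverage post-processing recorded just before the VM teardown is a plain lcov merge-and-filter pass followed by an optional flamegraph of the timing data. A minimal sketch of the same steps, with /home/vagrant/spdk_repo/output assumed as the output directory and only the branch/function-coverage --rc switches kept from the longer command lines above:

  #!/usr/bin/env bash
  set -euo pipefail
  OUT=/home/vagrant/spdk_repo/output             # assumed; the log writes to spdk/../output
  LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"

  # Merge the baseline and test captures into one tracefile.
  lcov $LCOV_OPTS -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

  # Drop coverage for code that is not part of SPDK itself.
  lcov $LCOV_OPTS -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
  lcov $LCOV_OPTS -q -r "$OUT/cov_total.info" '/usr/*' --ignore-errors unused,unused -o "$OUT/cov_total.info"

  # Optional: render the build-timing flamegraph when FlameGraph is installed.
  if [ -x /usr/local/FlameGraph/flamegraph.pl ]; then
      /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: \
          --countname seconds "$OUT/timing.txt" > "$OUT/timing.svg"
  fi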